00:00:00.001 Started by upstream project "autotest-per-patch" build number 132131
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.130 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.131 The recommended git tool is: git
00:00:00.131 using credential 00000000-0000-0000-0000-000000000002
00:00:00.133 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.199 Fetching changes from the remote Git repository
00:00:00.226 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.268 Using shallow fetch with depth 1
00:00:00.268 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.268 > git --version # timeout=10
00:00:00.298 > git --version # 'git version 2.39.2'
00:00:00.298 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.313 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.313 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.217 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.228 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.240 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:07.240 > git config core.sparsecheckout # timeout=10
00:00:07.249 > git read-tree -mu HEAD # timeout=10
00:00:07.266 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:07.286 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:07.286 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:07.367 [Pipeline] Start of Pipeline
00:00:07.382 [Pipeline] library
00:00:07.384 Loading library shm_lib@master
00:00:07.384 Library shm_lib@master is cached. Copying from home.
00:00:07.399 [Pipeline] node
00:00:07.407 Running on CYP13 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.409 [Pipeline] {
00:00:07.416 [Pipeline] catchError
00:00:07.417 [Pipeline] {
00:00:07.425 [Pipeline] wrap
00:00:07.431 [Pipeline] {
00:00:07.437 [Pipeline] stage
00:00:07.439 [Pipeline] { (Prologue)
00:00:07.662 [Pipeline] sh
00:00:07.947 + logger -p user.info -t JENKINS-CI
00:00:07.967 [Pipeline] echo
00:00:07.968 Node: CYP13
00:00:07.975 [Pipeline] sh
00:00:08.280 [Pipeline] setCustomBuildProperty
00:00:08.292 [Pipeline] echo
00:00:08.294 Cleanup processes
00:00:08.300 [Pipeline] sh
00:00:08.593 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.593 3444444 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.608 [Pipeline] sh
00:00:08.900 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.900 ++ grep -v 'sudo pgrep'
00:00:08.900 ++ awk '{print $1}'
00:00:08.900 + sudo kill -9
00:00:08.900 + true
00:00:08.915 [Pipeline] cleanWs
00:00:08.925 [WS-CLEANUP] Deleting project workspace...
00:00:08.925 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.932 [WS-CLEANUP] done
00:00:08.936 [Pipeline] setCustomBuildProperty
00:00:08.949 [Pipeline] sh
00:00:09.235 + sudo git config --global --replace-all safe.directory '*'
00:00:09.329 [Pipeline] httpRequest
00:00:10.373 [Pipeline] echo
00:00:10.375 Sorcerer 10.211.164.101 is alive
00:00:10.384 [Pipeline] retry
00:00:10.386 [Pipeline] {
00:00:10.396 [Pipeline] httpRequest
00:00:10.400 HttpMethod: GET
00:00:10.400 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:10.401 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:10.413 Response Code: HTTP/1.1 200 OK
00:00:10.413 Success: Status code 200 is in the accepted range: 200,404
00:00:10.414 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:12.581 [Pipeline] }
00:00:12.598 [Pipeline] // retry
00:00:12.605 [Pipeline] sh
00:00:12.894 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:12.910 [Pipeline] httpRequest
00:00:13.316 [Pipeline] echo
00:00:13.317 Sorcerer 10.211.164.101 is alive
00:00:13.327 [Pipeline] retry
00:00:13.329 [Pipeline] {
00:00:13.342 [Pipeline] httpRequest
00:00:13.347 HttpMethod: GET
00:00:13.347 URL: http://10.211.164.101/packages/spdk_924c8133b55b8c60c79ed163df84163015e8bcb4.tar.gz
00:00:13.348 Sending request to url: http://10.211.164.101/packages/spdk_924c8133b55b8c60c79ed163df84163015e8bcb4.tar.gz
00:00:13.370 Response Code: HTTP/1.1 200 OK
00:00:13.371 Success: Status code 200 is in the accepted range: 200,404
00:00:13.371 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_924c8133b55b8c60c79ed163df84163015e8bcb4.tar.gz
00:00:51.113 [Pipeline] }
00:00:51.129 [Pipeline] // retry
00:00:51.136 [Pipeline] sh
00:00:51.425 + tar --no-same-owner -xf spdk_924c8133b55b8c60c79ed163df84163015e8bcb4.tar.gz
00:00:54.739 [Pipeline] sh
00:00:55.026 + git -C spdk log --oneline -n5
00:00:55.026 924c8133b nvmf: Add no_metadata option to nvmf_subsystem_add_ns
00:00:55.026 d5ad1ab70 nvmf: Get metadata config by not bdev but bdev_desc
00:00:55.027 40c30569f bdevperf: Add no_metadata option
00:00:55.027 3351abe6a bdevperf: Get metadata config by not bdev but bdev_desc
00:00:55.027 8f46604d4 bdevperf: g_main_thread calls bdev_open() instead of job->thread
00:00:55.038 [Pipeline] }
00:00:55.050 [Pipeline] // stage
00:00:55.058 [Pipeline] stage
00:00:55.060 [Pipeline] { (Prepare)
00:00:55.074 [Pipeline] writeFile
00:00:55.088 [Pipeline] sh
00:00:55.376 + logger -p user.info -t JENKINS-CI
00:00:55.391 [Pipeline] sh
00:00:55.678 + logger -p user.info -t JENKINS-CI
00:00:55.690 [Pipeline] sh
00:00:55.978 + cat autorun-spdk.conf
00:00:55.978 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:55.978 SPDK_TEST_NVMF=1
00:00:55.978 SPDK_TEST_NVME_CLI=1
00:00:55.978 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:55.978 SPDK_TEST_NVMF_NICS=e810
00:00:55.978 SPDK_TEST_VFIOUSER=1
00:00:55.978 SPDK_RUN_UBSAN=1
00:00:55.978 NET_TYPE=phy
00:00:55.986 RUN_NIGHTLY=0
00:00:55.991 [Pipeline] readFile
00:00:56.019 [Pipeline] withEnv
00:00:56.021 [Pipeline] {
00:00:56.035 [Pipeline] sh
00:00:56.324 + set -ex
00:00:56.324 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:56.324 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:56.324 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:56.324 ++ SPDK_TEST_NVMF=1
00:00:56.324 ++ SPDK_TEST_NVME_CLI=1
00:00:56.324 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:56.324 ++ SPDK_TEST_NVMF_NICS=e810
00:00:56.324 ++ SPDK_TEST_VFIOUSER=1
00:00:56.324 ++ SPDK_RUN_UBSAN=1
00:00:56.324 ++ NET_TYPE=phy
00:00:56.324 ++ RUN_NIGHTLY=0
00:00:56.324 + case $SPDK_TEST_NVMF_NICS in
00:00:56.324 + DRIVERS=ice
00:00:56.324 + [[ tcp == \r\d\m\a ]]
00:00:56.324 + [[ -n ice ]]
00:00:56.324 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:56.324 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:56.324 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:56.324 rmmod: ERROR: Module irdma is not currently loaded
00:00:56.324 rmmod: ERROR: Module i40iw is not currently loaded
00:00:56.324 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:56.324 + true
00:00:56.324 + for D in $DRIVERS
00:00:56.324 + sudo modprobe ice
00:00:56.324 + exit 0
00:00:56.334 [Pipeline] }
00:00:56.349 [Pipeline] // withEnv
00:00:56.355 [Pipeline] }
00:00:56.369 [Pipeline] // stage
00:00:56.378 [Pipeline] catchError
00:00:56.380 [Pipeline] {
00:00:56.393 [Pipeline] timeout
00:00:56.393 Timeout set to expire in 1 hr 0 min
00:00:56.395 [Pipeline] {
00:00:56.408 [Pipeline] stage
00:00:56.410 [Pipeline] { (Tests)
00:00:56.425 [Pipeline] sh
00:00:56.714 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:56.714 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:56.714 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:56.714 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:56.714 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:56.714 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:56.714 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:56.714 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:56.714 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:56.714 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:56.714 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:56.714 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:56.714 + source /etc/os-release
00:00:56.714 ++ NAME='Fedora Linux'
00:00:56.714 ++ VERSION='39 (Cloud Edition)'
00:00:56.714 ++ ID=fedora
00:00:56.714 ++ VERSION_ID=39
00:00:56.714 ++ VERSION_CODENAME=
00:00:56.714 ++ PLATFORM_ID=platform:f39
00:00:56.714 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:56.714 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:56.714 ++ LOGO=fedora-logo-icon
00:00:56.714 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:56.714 ++ HOME_URL=https://fedoraproject.org/
00:00:56.714 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:56.714 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:56.714 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:56.714 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:56.714 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:56.714 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:56.714 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:56.714 ++ SUPPORT_END=2024-11-12
00:00:56.714 ++ VARIANT='Cloud Edition'
00:00:56.714 ++ VARIANT_ID=cloud
00:00:56.714 + uname -a
00:00:56.714 Linux spdk-cyp-13 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:56.715 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:00.017 Hugepages
00:01:00.017 node hugesize free / total
00:01:00.017 node0 1048576kB 0 / 0
00:01:00.017 node0 2048kB 0 / 0
00:01:00.017 node1 1048576kB 0 / 0
00:01:00.017 node1 2048kB 0 / 0
00:01:00.017 
00:01:00.017 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:00.017 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:00.017 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:00.017 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:00.017 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:00.017 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:00.017 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:00.017 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:00.017 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:00.017 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:00.017 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:00.017 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:00.017 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:00.017 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:00.017 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:00.017 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:00.017 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:00.017 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:00.017 + rm -f /tmp/spdk-ld-path
00:01:00.017 + source autorun-spdk.conf
00:01:00.017 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.017 ++ SPDK_TEST_NVMF=1
00:01:00.017 ++ SPDK_TEST_NVME_CLI=1
00:01:00.017 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:00.017 ++ SPDK_TEST_NVMF_NICS=e810
00:01:00.017 ++ SPDK_TEST_VFIOUSER=1
00:01:00.017 ++ SPDK_RUN_UBSAN=1
00:01:00.017 ++ NET_TYPE=phy
00:01:00.017 ++ RUN_NIGHTLY=0
00:01:00.017 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:00.017 + [[ -n '' ]]
00:01:00.017 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:00.017 + for M in /var/spdk/build-*-manifest.txt
00:01:00.017 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:00.017 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:00.017 + for M in /var/spdk/build-*-manifest.txt
00:01:00.017 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:00.017 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:00.017 + for M in /var/spdk/build-*-manifest.txt
00:01:00.017 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:00.017 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:00.017 ++ uname
00:01:00.017 + [[ Linux == \L\i\n\u\x ]]
00:01:00.017 + sudo dmesg -T
00:01:00.017 + sudo dmesg --clear
00:01:00.017 + dmesg_pid=3445425
00:01:00.017 + [[ Fedora Linux == FreeBSD ]]
00:01:00.017 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:00.017 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:00.017 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:00.017 + [[ -x /usr/src/fio-static/fio ]]
00:01:00.017 + export FIO_BIN=/usr/src/fio-static/fio
00:01:00.017 + FIO_BIN=/usr/src/fio-static/fio
00:01:00.017 + sudo dmesg -Tw
00:01:00.017 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:00.017 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:00.017 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:00.017 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:00.017 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:00.017 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:00.017 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:00.017 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:00.017 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:00.279 15:13:18 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:00.279 15:13:18 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:00.279 15:13:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.279 15:13:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:00.279 15:13:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:00.279 15:13:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:00.279 15:13:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:00.279 15:13:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:00.279 15:13:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:00.279 15:13:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:00.279 15:13:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:00.279 15:13:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:00.279 15:13:18 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:00.279 15:13:18 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:00.279 15:13:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:00.279 15:13:18 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:00.279 15:13:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:00.279 15:13:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:00.279 15:13:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:00.279 15:13:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:00.279 15:13:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:00.279 15:13:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:00.279 15:13:18 -- paths/export.sh@5 -- $ export PATH
00:01:00.279 15:13:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:00.279 15:13:18 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:00.279 15:13:18 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:00.279 15:13:18 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730902398.XXXXXX
00:01:00.279 15:13:18 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730902398.7VNQBE
00:01:00.279 15:13:18 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:00.279 15:13:18 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:00.279 15:13:18 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:00.279 15:13:18 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:00.279 15:13:18 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:00.279 15:13:18 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:00.279 15:13:18 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:00.279 15:13:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:00.279 15:13:18 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:00.279 15:13:18 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:00.279 15:13:18 -- pm/common@17 -- $ local monitor
00:01:00.279 15:13:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:00.279 15:13:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:00.279 15:13:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:00.279 15:13:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:00.279 15:13:18 -- pm/common@21 -- $ date +%s
00:01:00.279 15:13:18 -- pm/common@25 -- $ sleep 1
00:01:00.279 15:13:18 -- pm/common@21 -- $ date +%s
00:01:00.279 15:13:18 -- pm/common@21 -- $ date +%s
00:01:00.279 15:13:18 -- pm/common@21 -- $ date +%s
00:01:00.279 15:13:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730902398
00:01:00.279 15:13:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730902398
00:01:00.279 15:13:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730902398
00:01:00.279 15:13:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730902398
00:01:00.279 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730902398_collect-vmstat.pm.log
00:01:00.279 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730902398_collect-cpu-load.pm.log
00:01:00.279 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730902398_collect-cpu-temp.pm.log
00:01:00.279 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730902398_collect-bmc-pm.bmc.pm.log
00:01:01.222 15:13:19 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:01.222 15:13:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:01.222 15:13:19 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:01.222 15:13:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:01.222 15:13:19 -- spdk/autobuild.sh@16 -- $ date -u
00:01:01.222 Wed Nov 6 02:13:19 PM UTC 2024
00:01:01.222 15:13:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:01.222 v25.01-pre-194-g924c8133b
00:01:01.223 15:13:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:01.223 15:13:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:01.223 15:13:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:01.223 15:13:19 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:01.223 15:13:19 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:01.223 15:13:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.483 ************************************
00:01:01.483 START TEST ubsan
00:01:01.483 ************************************
00:01:01.483 15:13:19 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:01.483 using ubsan
00:01:01.483 
00:01:01.483 real 0m0.001s
00:01:01.483 user 0m0.000s
00:01:01.483 sys 0m0.000s
00:01:01.483 15:13:19 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:01.483 15:13:19 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:01.483 ************************************
00:01:01.483 END TEST ubsan
00:01:01.483 ************************************
00:01:01.483 15:13:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:01.483 15:13:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:01.483 15:13:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:01.483 15:13:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:01.483 15:13:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:01.483 15:13:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:01.483 15:13:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:01.483 15:13:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
15:13:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:01.483 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:01.483 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:02.052 Using 'verbs' RDMA provider
00:01:17.898 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:30.124 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:30.647 Creating mk/config.mk...done.
00:01:30.647 Creating mk/cc.flags.mk...done.
00:01:30.647 Type 'make' to build.
00:01:30.647 15:13:48 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:01:30.647 15:13:48 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:30.647 15:13:48 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:30.647 15:13:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.907 ************************************
00:01:30.907 START TEST make
00:01:30.907 ************************************
00:01:30.907 15:13:48 make -- common/autotest_common.sh@1127 -- $ make -j144
00:01:31.168 make[1]: Nothing to be done for 'all'.
00:01:32.573 The Meson build system
00:01:32.573 Version: 1.5.0
00:01:32.573 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:32.573 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:32.573 Build type: native build
00:01:32.573 Project name: libvfio-user
00:01:32.573 Project version: 0.0.1
00:01:32.573 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:32.573 C linker for the host machine: cc ld.bfd 2.40-14
00:01:32.573 Host machine cpu family: x86_64
00:01:32.573 Host machine cpu: x86_64
00:01:32.573 Run-time dependency threads found: YES
00:01:32.573 Library dl found: YES
00:01:32.573 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:32.573 Run-time dependency json-c found: YES 0.17
00:01:32.573 Run-time dependency cmocka found: YES 1.1.7
00:01:32.573 Program pytest-3 found: NO
00:01:32.573 Program flake8 found: NO
00:01:32.573 Program misspell-fixer found: NO
00:01:32.573 Program restructuredtext-lint found: NO
00:01:32.573 Program valgrind found: YES (/usr/bin/valgrind)
00:01:32.573 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:32.573 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:32.573 Compiler for C supports arguments -Wwrite-strings: YES
00:01:32.573 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:32.573 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:32.573 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:32.573 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:32.573 Build targets in project: 8
00:01:32.573 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:32.573 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:32.573 
00:01:32.573 libvfio-user 0.0.1
00:01:32.573 
00:01:32.573 User defined options
00:01:32.573 buildtype : debug
00:01:32.573 default_library: shared
00:01:32.573 libdir : /usr/local/lib
00:01:32.573 
00:01:32.573 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:33.142 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:33.142 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:33.142 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:33.142 [3/37] Compiling C object samples/null.p/null.c.o
00:01:33.142 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:33.142 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:33.142 [6/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:33.401 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:33.401 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:33.401 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:33.401 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:33.401 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:33.401 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:33.401 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:33.401 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:33.401 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:33.401 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:33.401 [17/37] Compiling C object samples/server.p/server.c.o
00:01:33.401 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:33.401 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:33.401 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:33.401 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:33.401 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:33.401 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:33.401 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:33.401 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:33.401 [26/37] Compiling C object samples/client.p/client.c.o
00:01:33.401 [27/37] Linking target samples/client
00:01:33.401 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:33.401 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:33.401 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:33.401 [31/37] Linking target test/unit_tests
00:01:33.661 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:33.661 [33/37] Linking target samples/server
00:01:33.661 [34/37] Linking target samples/null
00:01:33.661 [35/37] Linking target samples/gpio-pci-idio-16
00:01:33.661 [36/37] Linking target samples/lspci
00:01:33.661 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:33.661 INFO: autodetecting backend as ninja
00:01:33.661 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:33.661 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:33.920 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:33.920 ninja: no work to do.
00:01:40.592 The Meson build system
00:01:40.592 Version: 1.5.0
00:01:40.592 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:40.592 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:40.592 Build type: native build
00:01:40.592 Program cat found: YES (/usr/bin/cat)
00:01:40.592 Project name: DPDK
00:01:40.592 Project version: 24.03.0
00:01:40.592 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:40.592 C linker for the host machine: cc ld.bfd 2.40-14
00:01:40.592 Host machine cpu family: x86_64
00:01:40.592 Host machine cpu: x86_64
00:01:40.592 Message: ## Building in Developer Mode ##
00:01:40.592 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:40.592 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:40.592 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:40.592 Program python3 found: YES (/usr/bin/python3)
00:01:40.592 Program cat found: YES (/usr/bin/cat)
00:01:40.592 Compiler for C supports arguments -march=native: YES
00:01:40.592 Checking for size of "void *" : 8
00:01:40.592 Checking for size of "void *" : 8 (cached)
00:01:40.592 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:40.592 Library m found: YES
00:01:40.592 Library numa found: YES
00:01:40.592 Has header "numaif.h" : YES
00:01:40.592 Library fdt found: NO
00:01:40.592 Library execinfo found: NO
00:01:40.592 Has header "execinfo.h" : YES
00:01:40.592 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:40.592 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:40.592 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:40.592 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:40.592 Run-time dependency openssl found: YES 3.1.1
00:01:40.592 Run-time dependency libpcap found: YES 1.10.4
00:01:40.592 Has header "pcap.h" with dependency libpcap: YES
00:01:40.592 Compiler for C supports arguments -Wcast-qual: YES
00:01:40.592 Compiler for C supports arguments -Wdeprecated: YES
00:01:40.592 Compiler for C supports arguments -Wformat: YES
00:01:40.592 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:40.592 Compiler for C supports arguments -Wformat-security: NO
00:01:40.592 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:40.592 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:40.592 Compiler for C supports arguments -Wnested-externs: YES
00:01:40.592 Compiler for C supports arguments -Wold-style-definition: YES
00:01:40.592 Compiler for C supports arguments -Wpointer-arith: YES
00:01:40.592 Compiler for C supports arguments -Wsign-compare: YES
00:01:40.592 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:40.592 Compiler for C supports arguments -Wundef: YES
00:01:40.592 Compiler for C supports arguments -Wwrite-strings: YES
00:01:40.592 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:40.592 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:40.592 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:40.592 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:40.592 Program objdump found: YES (/usr/bin/objdump)
00:01:40.592 Compiler for C supports arguments -mavx512f: YES
00:01:40.592 Checking if "AVX512 checking" compiles: YES
00:01:40.592 Fetching value of define "__SSE4_2__" : 1
00:01:40.592 Fetching value of define "__AES__" : 1
00:01:40.592 Fetching value of define "__AVX__" : 1
00:01:40.592 Fetching value of define "__AVX2__" : 1
00:01:40.592 Fetching value of define "__AVX512BW__" : 1
00:01:40.592 Fetching value of define "__AVX512CD__" : 1
00:01:40.592 Fetching value of define "__AVX512DQ__" : 1
00:01:40.592 Fetching value of define "__AVX512F__" : 1
00:01:40.592 Fetching value of define "__AVX512VL__" : 1
00:01:40.592 Fetching value of define "__PCLMUL__" : 1
00:01:40.592 Fetching value of define "__RDRND__" : 1
00:01:40.592 Fetching value of define "__RDSEED__" : 1
00:01:40.592 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:40.592 Fetching value of define "__znver1__" : (undefined)
00:01:40.592 Fetching value of define "__znver2__" : (undefined)
00:01:40.592 Fetching value of define "__znver3__" : (undefined)
00:01:40.592 Fetching value of define "__znver4__" : (undefined)
00:01:40.592 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:40.592 Message: lib/log: Defining dependency "log"
00:01:40.592 Message: lib/kvargs: Defining dependency "kvargs"
00:01:40.592 Message: lib/telemetry: Defining dependency "telemetry"
00:01:40.592 Checking for function "getentropy" : NO
00:01:40.592 Message: lib/eal: Defining dependency "eal"
00:01:40.592 Message: lib/ring: Defining dependency "ring"
00:01:40.592 Message: lib/rcu: Defining dependency "rcu"
00:01:40.592 Message: lib/mempool: Defining dependency "mempool"
00:01:40.592 Message: lib/mbuf: Defining dependency "mbuf"
00:01:40.592 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:40.592 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:40.592 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:40.592 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:40.592 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:40.592 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:40.592 Compiler for C supports arguments -mpclmul: YES
00:01:40.592 Compiler for C supports arguments -maes: YES
00:01:40.592 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:40.592 Compiler for C supports arguments -mavx512bw: YES
00:01:40.592 Compiler for C supports arguments -mavx512dq: YES
00:01:40.592 Compiler for C supports arguments -mavx512vl: YES
00:01:40.592 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:40.592 Compiler for C supports arguments -mavx2: YES
00:01:40.592 Compiler for C supports arguments -mavx: YES
00:01:40.592 Message: lib/net: Defining dependency "net"
00:01:40.592 Message: lib/meter: Defining dependency "meter"
00:01:40.592 Message: lib/ethdev: Defining dependency "ethdev"
00:01:40.592 Message: lib/pci: Defining dependency "pci"
00:01:40.592 Message: lib/cmdline: Defining dependency "cmdline"
00:01:40.592 Message: lib/hash: Defining dependency "hash"
00:01:40.592 Message: lib/timer: Defining dependency "timer"
00:01:40.592 Message: lib/compressdev: Defining dependency "compressdev"
00:01:40.592 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:40.592 Message: lib/dmadev: Defining dependency "dmadev"
00:01:40.592 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:40.592 Message: lib/power: Defining dependency "power"
00:01:40.592 Message: lib/reorder: Defining dependency "reorder"
00:01:40.592 Message: lib/security: Defining dependency "security"
00:01:40.592 Has header "linux/userfaultfd.h" : YES
00:01:40.592 Has header "linux/vduse.h" : YES
00:01:40.592 Message: lib/vhost: Defining dependency "vhost"
00:01:40.592 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:40.592 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:40.592 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:40.592 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:40.592 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:40.592 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:40.592 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:40.592 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:40.592 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:40.592 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:40.592 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:40.592 Configuring doxy-api-html.conf using configuration
00:01:40.592 Configuring doxy-api-man.conf using configuration
00:01:40.592 Program mandb found: YES (/usr/bin/mandb)
00:01:40.592 Program sphinx-build found: NO
00:01:40.592 Configuring rte_build_config.h using configuration
00:01:40.592 Message:
00:01:40.592 =================
00:01:40.592 Applications Enabled
00:01:40.592 =================
00:01:40.592 
00:01:40.592 apps:
00:01:40.592 
00:01:40.592 
00:01:40.592 Message:
00:01:40.592 =================
00:01:40.592 Libraries Enabled
00:01:40.592 =================
00:01:40.592 
00:01:40.592 libs:
00:01:40.592 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:40.592 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:40.592 cryptodev, dmadev, power, reorder, security, vhost,
00:01:40.592 
00:01:40.592 Message:
00:01:40.592 ===============
00:01:40.592 Drivers Enabled
00:01:40.592 ===============
00:01:40.592 
00:01:40.592 common:
00:01:40.592 
00:01:40.592 bus:
00:01:40.592 pci, vdev,
00:01:40.592 mempool:
00:01:40.592 ring,
00:01:40.592 dma:
00:01:40.592 
00:01:40.592 net:
00:01:40.592 
00:01:40.592 crypto:
00:01:40.592 
00:01:40.592 compress:
00:01:40.592 
00:01:40.592 vdpa:
00:01:40.592 
00:01:40.592 
00:01:40.592 Message:
00:01:40.592 =================
00:01:40.592 Content Skipped
00:01:40.592 =================
00:01:40.592 
00:01:40.592 apps:
00:01:40.592 dumpcap: explicitly disabled via build config
00:01:40.592 graph: explicitly disabled via build config
00:01:40.592 pdump: explicitly disabled via build config
00:01:40.592 proc-info: explicitly disabled via build config
00:01:40.592 test-acl: explicitly disabled via build config
00:01:40.592 test-bbdev: explicitly disabled via build config
00:01:40.592 test-cmdline: explicitly disabled via build config
00:01:40.592 test-compress-perf: explicitly disabled via build config
00:01:40.592 test-crypto-perf: explicitly disabled via build config
00:01:40.592 test-dma-perf: explicitly disabled via build config
00:01:40.592 test-eventdev: explicitly disabled via build config
00:01:40.592 test-fib: explicitly disabled via build config
00:01:40.592 test-flow-perf: explicitly disabled via build config
00:01:40.592 test-gpudev: explicitly disabled via build config
00:01:40.593 test-mldev: explicitly disabled via build config
00:01:40.593 test-pipeline: explicitly disabled via build config
00:01:40.593 test-pmd: explicitly disabled via build config
00:01:40.593 test-regex: explicitly disabled via build config
00:01:40.593 test-sad: explicitly disabled via build config
00:01:40.593 test-security-perf: explicitly disabled via build config
00:01:40.593 
00:01:40.593 libs:
00:01:40.593 argparse: explicitly disabled via build config
00:01:40.593 metrics: explicitly disabled via build config
00:01:40.593 acl: explicitly disabled via build config
00:01:40.593 bbdev: explicitly disabled via build config
00:01:40.593 bitratestats: explicitly disabled via build config
00:01:40.593 bpf: explicitly disabled via build config
00:01:40.593 cfgfile: explicitly disabled via build config
00:01:40.593 distributor: explicitly disabled via build config
00:01:40.593 efd: explicitly disabled via build config
00:01:40.593 eventdev: explicitly disabled via build config
00:01:40.593 dispatcher: explicitly disabled via build config
00:01:40.593 gpudev: explicitly disabled via build config
00:01:40.593 gro: explicitly disabled via build config
00:01:40.593 gso: explicitly disabled via build config
00:01:40.593 ip_frag: explicitly disabled via build config
00:01:40.593 jobstats: explicitly disabled via build config
00:01:40.593 latencystats: explicitly disabled via build config
00:01:40.593 lpm: explicitly disabled via build config
00:01:40.593 member: explicitly disabled via build config
00:01:40.593 pcapng: explicitly disabled via build config
00:01:40.593 rawdev: explicitly disabled via build config
00:01:40.593 regexdev: explicitly disabled via build config
00:01:40.593 mldev: explicitly disabled via build config
00:01:40.593 rib: explicitly disabled via build config
00:01:40.593 sched: explicitly disabled via build config
00:01:40.593 stack: explicitly disabled via build config
00:01:40.593 ipsec: explicitly disabled via build config
00:01:40.593 pdcp: explicitly disabled via build config
00:01:40.593 fib: explicitly disabled via build config
00:01:40.593 port: explicitly disabled via build config
00:01:40.593 pdump: explicitly disabled via build config
00:01:40.593 table: explicitly disabled via build config
00:01:40.593 pipeline: explicitly disabled via build config
00:01:40.593 graph: explicitly disabled via build config
00:01:40.593 node: explicitly disabled via build config
00:01:40.593 
00:01:40.593 drivers:
00:01:40.593 common/cpt: not in enabled drivers build config
00:01:40.593 common/dpaax: not in enabled drivers build config
00:01:40.593 common/iavf: not in enabled drivers build config
00:01:40.593 common/idpf: not in enabled drivers build config
00:01:40.593 common/ionic: not in enabled drivers build config
00:01:40.593 common/mvep: not in enabled drivers build config
00:01:40.593 common/octeontx: not in enabled drivers build config
00:01:40.593 bus/auxiliary: not in enabled drivers build config
00:01:40.593 bus/cdx: not in enabled drivers build config
00:01:40.593 bus/dpaa: not in enabled drivers build config
00:01:40.593 bus/fslmc: not in enabled drivers build config
00:01:40.593 bus/ifpga: not in enabled drivers build config
00:01:40.593 bus/platform: not in enabled drivers build config
00:01:40.593 bus/uacce: not in enabled drivers build config
00:01:40.593 bus/vmbus: not in enabled drivers build config
00:01:40.593 common/cnxk: not in enabled drivers build config
00:01:40.593 common/mlx5: not in enabled drivers build config
00:01:40.593 common/nfp: not in enabled drivers build config
00:01:40.593 common/nitrox: not in enabled drivers build config
00:01:40.593 common/qat: not in enabled drivers build config
00:01:40.593 common/sfc_efx: not in enabled drivers build config
00:01:40.593 mempool/bucket: not in enabled drivers build config
00:01:40.593 mempool/cnxk: not in enabled drivers build config
00:01:40.593 mempool/dpaa: not in enabled drivers build config
00:01:40.593 mempool/dpaa2: not in enabled drivers build config
00:01:40.593 mempool/octeontx: not in enabled drivers build config
00:01:40.593 mempool/stack: not in enabled drivers build config
00:01:40.593 dma/cnxk: not in enabled drivers build config
00:01:40.593 dma/dpaa: not in enabled drivers build config
00:01:40.593 dma/dpaa2: not in enabled drivers build config
00:01:40.593 dma/hisilicon: not in enabled drivers build config
00:01:40.593 dma/idxd: not in enabled drivers build config
00:01:40.593 dma/ioat: not in enabled drivers build config
00:01:40.593 dma/skeleton: not in enabled drivers build config
00:01:40.593 net/af_packet: not in enabled drivers build config
00:01:40.593 net/af_xdp: not in enabled drivers build config
00:01:40.593 net/ark: not in enabled drivers build config
00:01:40.593 net/atlantic: not in enabled drivers build config
00:01:40.593 net/avp: not in enabled drivers build config
00:01:40.593 net/axgbe: not in enabled drivers build config
00:01:40.593 net/bnx2x: not in enabled drivers build config
00:01:40.593 net/bnxt: not in enabled drivers build config
00:01:40.593 net/bonding: not in enabled drivers build config
00:01:40.593 net/cnxk: not in enabled drivers build config
00:01:40.593 net/cpfl: not in enabled drivers build config
00:01:40.593 net/cxgbe: not in enabled drivers build config
00:01:40.593 net/dpaa: not in enabled drivers build config
00:01:40.593 net/dpaa2: not in enabled drivers build config
00:01:40.593 net/e1000: not in enabled drivers build config
00:01:40.593 net/ena: not in enabled drivers build config
00:01:40.593 net/enetc: not in enabled drivers build config
00:01:40.593 net/enetfec: not in enabled drivers build config
00:01:40.593 net/enic: not in enabled drivers build config
00:01:40.593 net/failsafe: not in enabled drivers build config
00:01:40.593 net/fm10k: not in enabled drivers build config
00:01:40.593 net/gve: not in enabled drivers build config
00:01:40.593 net/hinic: not in enabled drivers build config
00:01:40.593 net/hns3: not in enabled drivers build config
00:01:40.593 net/i40e: not in enabled drivers build config
00:01:40.593 net/iavf: not in enabled drivers build config
00:01:40.593 net/ice: not in enabled drivers build config
00:01:40.593 net/idpf: not in enabled drivers build config
00:01:40.593 net/igc: not in enabled drivers build config
00:01:40.593 net/ionic: not in enabled drivers build config
00:01:40.593 net/ipn3ke: not in enabled drivers build config
00:01:40.593 net/ixgbe: not in enabled drivers build config
00:01:40.593 net/mana: not in enabled drivers build config
00:01:40.593 net/memif: not in enabled drivers build config
00:01:40.593 net/mlx4: not in enabled drivers build config
00:01:40.593 net/mlx5: not in enabled drivers build config
00:01:40.593 net/mvneta: not in enabled drivers build config
00:01:40.593 net/mvpp2: not in enabled drivers build config
00:01:40.593 net/netvsc: not in enabled drivers build config
00:01:40.593 net/nfb: not in enabled drivers build config
00:01:40.593 net/nfp: not in enabled drivers build config
00:01:40.593 net/ngbe: not in enabled drivers build config
00:01:40.593 net/null: not in enabled drivers build config
00:01:40.593 net/octeontx: not in enabled drivers build config
00:01:40.593 net/octeon_ep: not in enabled drivers build config
00:01:40.593 net/pcap: not in enabled drivers build config
00:01:40.593 net/pfe: not in enabled drivers build config
00:01:40.593 net/qede: not in enabled drivers build config
00:01:40.593 net/ring: not in enabled drivers build config
00:01:40.593 net/sfc: not in enabled drivers build config
00:01:40.593 net/softnic: not in enabled drivers build config
00:01:40.593 net/tap: not in enabled drivers build config
00:01:40.593 net/thunderx: not in enabled drivers build config
00:01:40.593 net/txgbe: not in enabled drivers build config
00:01:40.593 net/vdev_netvsc: not in enabled drivers build config
00:01:40.593 net/vhost: not in enabled drivers build config
00:01:40.593 net/virtio: not in enabled drivers build config
00:01:40.593 net/vmxnet3: not in enabled drivers build config
00:01:40.593 raw/*: missing internal dependency, "rawdev"
00:01:40.593 crypto/armv8: not in enabled drivers build config
00:01:40.593 crypto/bcmfs: not in enabled drivers build config
00:01:40.593 crypto/caam_jr: not in enabled drivers build config
00:01:40.593 crypto/ccp: not in enabled drivers build config
00:01:40.593 crypto/cnxk: not in enabled drivers build config
00:01:40.593 crypto/dpaa_sec: not in enabled drivers build config
00:01:40.593 crypto/dpaa2_sec: not in enabled drivers build config
00:01:40.593 crypto/ipsec_mb: not in enabled drivers build config
00:01:40.593 crypto/mlx5: not in enabled drivers build config
00:01:40.593 crypto/mvsam: not in enabled drivers build config
00:01:40.593 crypto/nitrox: not in enabled drivers build config
00:01:40.593 crypto/null: not in enabled drivers build config
00:01:40.593 crypto/octeontx: not in enabled drivers build config
00:01:40.593 crypto/openssl: not in enabled drivers build config
00:01:40.593 crypto/scheduler: not in enabled drivers build config
00:01:40.593 crypto/uadk: not in enabled drivers build config
00:01:40.593 crypto/virtio: not in enabled drivers build config
00:01:40.593 compress/isal: not in enabled drivers build config
00:01:40.593 compress/mlx5: not in enabled drivers build config
00:01:40.593 compress/nitrox: not in enabled drivers build config
00:01:40.593 compress/octeontx: not in enabled drivers build config
00:01:40.593 compress/zlib: not in enabled drivers build config
00:01:40.593 regex/*: missing internal dependency, "regexdev"
00:01:40.593 ml/*: missing internal dependency, "mldev"
00:01:40.593 vdpa/ifc: not in enabled drivers build config
00:01:40.593 vdpa/mlx5: not in enabled drivers build config
00:01:40.593 vdpa/nfp: not in enabled drivers build config
00:01:40.593 vdpa/sfc: not in enabled drivers build config
00:01:40.593 event/*: missing internal dependency, "eventdev"
00:01:40.593 baseband/*: missing internal dependency, "bbdev"
00:01:40.593 gpu/*: missing internal dependency, "gpudev"
00:01:40.593 
00:01:40.593 
00:01:40.593 Build targets in project: 84
00:01:40.593 
00:01:40.593 DPDK 24.03.0
00:01:40.593 
00:01:40.593 User defined options
00:01:40.593 buildtype : debug
00:01:40.593 default_library : shared
00:01:40.594 libdir : lib
00:01:40.594 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:40.594 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:40.594 c_link_args : 
00:01:40.594 cpu_instruction_set: native
00:01:40.594 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:40.594 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:40.594 enable_docs : false
00:01:40.594 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:40.594 enable_kmods : false
00:01:40.594 max_lcores : 128
00:01:40.594 tests : false
00:01:40.594 
00:01:40.594 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:40.594 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:40.594 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:40.594 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:40.594 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:40.594 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:40.594 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:40.594 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:40.594 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:40.594 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:40.594 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:40.594 [10/267] Linking static target lib/librte_kvargs.a
00:01:40.594 [11/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:40.594 [12/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:40.594 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:40.594 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:40.594 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:40.594 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:40.594 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:40.594 [18/267] Linking static target lib/librte_log.a
00:01:40.594 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:40.594 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:40.594 [21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:40.594 [22/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:40.594 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:40.594 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:40.594 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:40.594 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:40.594 [27/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:40.594 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:40.594 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:40.594 [30/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:40.594 [31/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:40.594 [32/267] Linking static target lib/librte_pci.a
00:01:40.853 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:40.853 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:40.853 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:40.853 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:40.853 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:40.853 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:40.853 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:40.853 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:40.853 [41/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:40.853 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:40.853 [43/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.853 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:40.853 [45/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:41.111 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:41.111 [47/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.111 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:41.111 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:41.111 [50/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:41.111 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:41.111 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:41.111 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:41.111 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:41.111 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:41.111 [56/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:41.111 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:41.111 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:41.111 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:41.111 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:41.111 [61/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:01:41.111 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:41.111 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:41.111 [64/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:41.111 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:41.111 [66/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:41.111 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:41.111 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:41.111 [69/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:41.111 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:41.111 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:41.111 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:41.111 [73/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:41.111 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:41.111 [75/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:41.111 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:41.111 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:41.111 [78/267] Linking static target lib/librte_telemetry.a 00:01:41.111 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:41.111 [80/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:41.111 [81/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:41.111 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:41.111 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:41.111 [84/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:41.111 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:41.111 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:41.111 [87/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:41.111 [88/267] Linking static target lib/librte_timer.a 00:01:41.112 [89/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:41.112 [90/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:41.112 [91/267] Linking static target lib/librte_dmadev.a 00:01:41.112 [92/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:41.112 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:41.112 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:41.112 [95/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:41.112 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:41.112 [97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:41.112 [98/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:41.112 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:41.112 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:41.112 [101/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:41.112 [102/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:41.112 [103/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:41.112 [104/267] Linking static target lib/librte_meter.a 00:01:41.112 [105/267] Linking static target lib/librte_ring.a 00:01:41.112 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:41.112 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:41.112 [108/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:41.112 [109/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:41.112 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:41.112 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:41.112 [112/267] Compiling 
C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:41.112 [113/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:41.112 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:41.112 [115/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:41.112 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:41.112 [117/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:41.112 [118/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:41.112 [119/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:41.112 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:41.112 [121/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:41.112 [122/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:41.112 [123/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:41.112 [124/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:41.112 [125/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:41.112 [126/267] Linking static target lib/librte_mempool.a 00:01:41.112 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:41.112 [128/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:41.112 [129/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:41.112 [130/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:41.112 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:41.112 [132/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:41.112 [133/267] Linking static target lib/librte_compressdev.a 00:01:41.112 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:41.112 [135/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:41.112 [136/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:41.112 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:41.112 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:41.112 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:41.112 [140/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:41.112 [141/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:41.112 [142/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:41.112 [143/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.112 [144/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:41.112 [145/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:41.112 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:41.112 [147/267] Linking static target lib/librte_cmdline.a 00:01:41.112 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:41.112 [149/267] Linking static target lib/librte_net.a 00:01:41.112 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:41.112 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:41.112 
[152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:41.112 [153/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:41.112 [154/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:41.112 [155/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:41.112 [156/267] Linking static target lib/librte_rcu.a 00:01:41.371 [157/267] Linking static target lib/librte_reorder.a 00:01:41.371 [158/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:41.371 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:41.371 [160/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:41.371 [161/267] Linking target lib/librte_log.so.24.1 00:01:41.371 [162/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:41.371 [163/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:41.371 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:41.371 [165/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:41.371 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:41.371 [167/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.371 [168/267] Linking static target lib/librte_power.a 00:01:41.371 [169/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:41.371 [170/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.371 [171/267] Linking static target lib/librte_eal.a 00:01:41.371 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:41.371 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:41.371 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:41.371 [175/267] Linking static target drivers/librte_bus_vdev.a 00:01:41.371 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:41.371 [177/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:41.371 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:41.371 [179/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:41.371 [180/267] Linking static target lib/librte_security.a 00:01:41.371 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:41.371 [182/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.371 [183/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:41.371 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:41.371 [185/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:41.371 [186/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:41.371 [187/267] Linking static target lib/librte_mbuf.a 00:01:41.371 [188/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:41.371 [189/267] Linking target lib/librte_kvargs.so.24.1 00:01:41.371 [190/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:41.371 [191/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.371 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:41.371 [193/267] Linking static target lib/librte_hash.a 00:01:41.371 [194/267] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.371 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.371 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:41.630 [197/267] Linking static target drivers/librte_bus_pci.a 00:01:41.630 [198/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:41.630 [199/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.630 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.630 [201/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:41.630 [202/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:41.630 [203/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:41.630 [204/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:41.630 [205/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.630 [206/267] Linking static target lib/librte_cryptodev.a 00:01:41.630 [207/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.630 [208/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.630 [209/267] Linking static target drivers/librte_mempool_ring.a 00:01:41.630 [210/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.630 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:41.631 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.631 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.888 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.888 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:41.889 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.889 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.148 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:42.148 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:42.148 [220/267] Linking static target lib/librte_ethdev.a 00:01:42.148 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.407 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.407 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.407 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.407 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.667 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.235 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:43.235 [228/267] Linking static target lib/librte_vhost.a 00:01:43.802 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:45.188 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.774 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.159 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.159 [233/267] Linking target lib/librte_eal.so.24.1 00:01:53.159 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:53.159 [235/267] Linking target lib/librte_ring.so.24.1 00:01:53.159 [236/267] Linking target lib/librte_meter.so.24.1 00:01:53.159 [237/267] Linking target lib/librte_pci.so.24.1 00:01:53.159 [238/267] Linking target lib/librte_timer.so.24.1 00:01:53.159 [239/267] Linking target lib/librte_dmadev.so.24.1 00:01:53.159 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:53.159 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:53.420 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:53.420 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:53.420 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:53.420 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:53.420 [246/267] Linking target lib/librte_rcu.so.24.1 00:01:53.420 [247/267] Linking target lib/librte_mempool.so.24.1 00:01:53.420 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:53.420 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:53.420 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:53.420 [251/267] Linking target lib/librte_mbuf.so.24.1 00:01:53.420 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:53.681 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:53.681 [254/267] Linking target lib/librte_net.so.24.1 00:01:53.681 [255/267] Linking target lib/librte_compressdev.so.24.1 00:01:53.681 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:53.681 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:53.941 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:53.941 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:53.941 [260/267] Linking target lib/librte_cmdline.so.24.1 00:01:53.941 [261/267] Linking target lib/librte_hash.so.24.1 00:01:53.941 [262/267] Linking target lib/librte_security.so.24.1 00:01:53.941 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:53.941 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:53.941 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:54.201 [266/267] Linking target lib/librte_power.so.24.1 00:01:54.201 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:54.201 INFO: autodetecting backend as ninja 00:01:54.201 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:57.503 CC lib/log/log.o 00:01:57.503 CC lib/log/log_flags.o 00:01:57.503 CC lib/log/log_deprecated.o 00:01:57.503 CC lib/ut_mock/mock.o 00:01:57.503 CC lib/ut/ut.o 00:01:57.503 LIB libspdk_ut_mock.a 00:01:57.503 LIB libspdk_log.a 00:01:57.503 LIB libspdk_ut.a 
00:01:57.503 SO libspdk_ut_mock.so.6.0 00:01:57.503 SO libspdk_log.so.7.1 00:01:57.503 SO libspdk_ut.so.2.0 00:01:57.503 SYMLINK libspdk_ut_mock.so 00:01:57.503 SYMLINK libspdk_log.so 00:01:57.503 SYMLINK libspdk_ut.so 00:01:58.075 CC lib/util/base64.o 00:01:58.075 CC lib/util/bit_array.o 00:01:58.075 CC lib/util/cpuset.o 00:01:58.075 CC lib/util/crc16.o 00:01:58.075 CC lib/util/crc32.o 00:01:58.075 CC lib/util/crc32c.o 00:01:58.075 CC lib/ioat/ioat.o 00:01:58.075 CC lib/util/crc32_ieee.o 00:01:58.075 CC lib/util/crc64.o 00:01:58.075 CC lib/util/dif.o 00:01:58.075 CC lib/util/fd.o 00:01:58.075 CC lib/util/fd_group.o 00:01:58.075 CC lib/util/file.o 00:01:58.075 CC lib/dma/dma.o 00:01:58.075 CC lib/util/hexlify.o 00:01:58.075 CC lib/util/iov.o 00:01:58.075 CC lib/util/math.o 00:01:58.075 CC lib/util/net.o 00:01:58.075 CC lib/util/pipe.o 00:01:58.075 CXX lib/trace_parser/trace.o 00:01:58.075 CC lib/util/strerror_tls.o 00:01:58.075 CC lib/util/string.o 00:01:58.075 CC lib/util/uuid.o 00:01:58.075 CC lib/util/xor.o 00:01:58.075 CC lib/util/zipf.o 00:01:58.075 CC lib/util/md5.o 00:01:58.075 CC lib/vfio_user/host/vfio_user_pci.o 00:01:58.075 CC lib/vfio_user/host/vfio_user.o 00:01:58.075 LIB libspdk_dma.a 00:01:58.335 SO libspdk_dma.so.5.0 00:01:58.335 LIB libspdk_ioat.a 00:01:58.335 SO libspdk_ioat.so.7.0 00:01:58.335 SYMLINK libspdk_dma.so 00:01:58.335 SYMLINK libspdk_ioat.so 00:01:58.335 LIB libspdk_vfio_user.a 00:01:58.335 SO libspdk_vfio_user.so.5.0 00:01:58.595 LIB libspdk_util.a 00:01:58.595 SYMLINK libspdk_vfio_user.so 00:01:58.595 SO libspdk_util.so.10.1 00:01:58.596 SYMLINK libspdk_util.so 00:01:58.855 LIB libspdk_trace_parser.a 00:01:58.855 SO libspdk_trace_parser.so.6.0 00:01:58.855 SYMLINK libspdk_trace_parser.so 00:01:59.116 CC lib/json/json_parse.o 00:01:59.116 CC lib/json/json_util.o 00:01:59.116 CC lib/json/json_write.o 00:01:59.116 CC lib/rdma_utils/rdma_utils.o 00:01:59.116 CC lib/conf/conf.o 00:01:59.116 CC lib/idxd/idxd.o 00:01:59.116 CC lib/env_dpdk/env.o 00:01:59.116 CC lib/vmd/vmd.o 00:01:59.116 CC lib/idxd/idxd_user.o 00:01:59.116 CC lib/env_dpdk/memory.o 00:01:59.116 CC lib/vmd/led.o 00:01:59.116 CC lib/idxd/idxd_kernel.o 00:01:59.116 CC lib/env_dpdk/pci.o 00:01:59.116 CC lib/env_dpdk/init.o 00:01:59.116 CC lib/env_dpdk/threads.o 00:01:59.116 CC lib/env_dpdk/pci_ioat.o 00:01:59.116 CC lib/env_dpdk/pci_virtio.o 00:01:59.116 CC lib/env_dpdk/pci_vmd.o 00:01:59.116 CC lib/env_dpdk/pci_idxd.o 00:01:59.116 CC lib/env_dpdk/pci_event.o 00:01:59.116 CC lib/env_dpdk/sigbus_handler.o 00:01:59.116 CC lib/env_dpdk/pci_dpdk.o 00:01:59.116 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:59.116 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:59.376 LIB libspdk_conf.a 00:01:59.376 SO libspdk_conf.so.6.0 00:01:59.376 LIB libspdk_rdma_utils.a 00:01:59.376 LIB libspdk_json.a 00:01:59.376 SO libspdk_rdma_utils.so.1.0 00:01:59.376 SYMLINK libspdk_conf.so 00:01:59.376 SO libspdk_json.so.6.0 00:01:59.376 SYMLINK libspdk_rdma_utils.so 00:01:59.376 SYMLINK libspdk_json.so 00:01:59.637 LIB libspdk_idxd.a 00:01:59.637 SO libspdk_idxd.so.12.1 00:01:59.637 LIB libspdk_vmd.a 00:01:59.637 SO libspdk_vmd.so.6.0 00:01:59.637 SYMLINK libspdk_idxd.so 00:01:59.898 SYMLINK libspdk_vmd.so 00:01:59.898 CC lib/rdma_provider/common.o 00:01:59.898 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:59.898 CC lib/jsonrpc/jsonrpc_server.o 00:01:59.898 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:59.898 CC lib/jsonrpc/jsonrpc_client.o 00:01:59.898 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:00.159 LIB libspdk_rdma_provider.a 00:02:00.159 SO 
libspdk_rdma_provider.so.7.0 00:02:00.159 LIB libspdk_jsonrpc.a 00:02:00.159 SO libspdk_jsonrpc.so.6.0 00:02:00.159 SYMLINK libspdk_rdma_provider.so 00:02:00.159 SYMLINK libspdk_jsonrpc.so 00:02:00.419 LIB libspdk_env_dpdk.a 00:02:00.419 SO libspdk_env_dpdk.so.15.1 00:02:00.419 SYMLINK libspdk_env_dpdk.so 00:02:00.679 CC lib/rpc/rpc.o 00:02:00.939 LIB libspdk_rpc.a 00:02:00.939 SO libspdk_rpc.so.6.0 00:02:00.939 SYMLINK libspdk_rpc.so 00:02:01.200 CC lib/notify/notify.o 00:02:01.200 CC lib/trace/trace.o 00:02:01.200 CC lib/notify/notify_rpc.o 00:02:01.200 CC lib/trace/trace_flags.o 00:02:01.200 CC lib/trace/trace_rpc.o 00:02:01.200 CC lib/keyring/keyring.o 00:02:01.200 CC lib/keyring/keyring_rpc.o 00:02:01.460 LIB libspdk_notify.a 00:02:01.460 SO libspdk_notify.so.6.0 00:02:01.460 LIB libspdk_keyring.a 00:02:01.460 LIB libspdk_trace.a 00:02:01.460 SYMLINK libspdk_notify.so 00:02:01.720 SO libspdk_keyring.so.2.0 00:02:01.720 SO libspdk_trace.so.11.0 00:02:01.720 SYMLINK libspdk_keyring.so 00:02:01.720 SYMLINK libspdk_trace.so 00:02:01.980 CC lib/sock/sock.o 00:02:01.980 CC lib/thread/thread.o 00:02:01.980 CC lib/sock/sock_rpc.o 00:02:01.980 CC lib/thread/iobuf.o 00:02:02.552 LIB libspdk_sock.a 00:02:02.552 SO libspdk_sock.so.10.0 00:02:02.552 SYMLINK libspdk_sock.so 00:02:02.813 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:02.813 CC lib/nvme/nvme_ctrlr.o 00:02:02.813 CC lib/nvme/nvme_fabric.o 00:02:02.813 CC lib/nvme/nvme_ns_cmd.o 00:02:02.813 CC lib/nvme/nvme_ns.o 00:02:02.813 CC lib/nvme/nvme_pcie_common.o 00:02:02.813 CC lib/nvme/nvme_pcie.o 00:02:02.813 CC lib/nvme/nvme_qpair.o 00:02:02.813 CC lib/nvme/nvme.o 00:02:02.813 CC lib/nvme/nvme_quirks.o 00:02:02.813 CC lib/nvme/nvme_transport.o 00:02:02.813 CC lib/nvme/nvme_discovery.o 00:02:02.813 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:02.813 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:02.813 CC lib/nvme/nvme_tcp.o 00:02:02.813 CC lib/nvme/nvme_opal.o 00:02:02.813 CC lib/nvme/nvme_io_msg.o 00:02:02.813 CC lib/nvme/nvme_poll_group.o 00:02:02.813 CC lib/nvme/nvme_zns.o 00:02:02.813 CC lib/nvme/nvme_stubs.o 00:02:02.813 CC lib/nvme/nvme_auth.o 00:02:02.813 CC lib/nvme/nvme_cuse.o 00:02:02.813 CC lib/nvme/nvme_vfio_user.o 00:02:02.813 CC lib/nvme/nvme_rdma.o 00:02:03.383 LIB libspdk_thread.a 00:02:03.383 SO libspdk_thread.so.11.0 00:02:03.383 SYMLINK libspdk_thread.so 00:02:03.955 CC lib/vfu_tgt/tgt_endpoint.o 00:02:03.955 CC lib/vfu_tgt/tgt_rpc.o 00:02:03.955 CC lib/accel/accel.o 00:02:03.955 CC lib/init/json_config.o 00:02:03.955 CC lib/accel/accel_rpc.o 00:02:03.955 CC lib/init/subsystem_rpc.o 00:02:03.955 CC lib/init/subsystem.o 00:02:03.955 CC lib/accel/accel_sw.o 00:02:03.955 CC lib/init/rpc.o 00:02:03.955 CC lib/blob/blobstore.o 00:02:03.955 CC lib/fsdev/fsdev.o 00:02:03.955 CC lib/blob/request.o 00:02:03.955 CC lib/fsdev/fsdev_io.o 00:02:03.955 CC lib/blob/zeroes.o 00:02:03.955 CC lib/fsdev/fsdev_rpc.o 00:02:03.955 CC lib/blob/blob_bs_dev.o 00:02:03.955 CC lib/virtio/virtio.o 00:02:03.955 CC lib/virtio/virtio_vhost_user.o 00:02:03.955 CC lib/virtio/virtio_vfio_user.o 00:02:03.955 CC lib/virtio/virtio_pci.o 00:02:04.215 LIB libspdk_init.a 00:02:04.215 SO libspdk_init.so.6.0 00:02:04.215 LIB libspdk_vfu_tgt.a 00:02:04.215 LIB libspdk_virtio.a 00:02:04.215 SO libspdk_vfu_tgt.so.3.0 00:02:04.215 SYMLINK libspdk_init.so 00:02:04.215 SO libspdk_virtio.so.7.0 00:02:04.215 SYMLINK libspdk_vfu_tgt.so 00:02:04.215 SYMLINK libspdk_virtio.so 00:02:04.475 LIB libspdk_fsdev.a 00:02:04.475 SO libspdk_fsdev.so.2.0 00:02:04.475 SYMLINK libspdk_fsdev.so 00:02:04.475 CC 
lib/event/app.o 00:02:04.475 CC lib/event/reactor.o 00:02:04.475 CC lib/event/log_rpc.o 00:02:04.736 CC lib/event/app_rpc.o 00:02:04.736 CC lib/event/scheduler_static.o 00:02:04.736 LIB libspdk_accel.a 00:02:04.997 SO libspdk_accel.so.16.0 00:02:04.997 LIB libspdk_nvme.a 00:02:04.997 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:04.997 SYMLINK libspdk_accel.so 00:02:04.997 LIB libspdk_event.a 00:02:04.997 SO libspdk_nvme.so.15.0 00:02:04.997 SO libspdk_event.so.14.0 00:02:05.258 SYMLINK libspdk_event.so 00:02:05.258 SYMLINK libspdk_nvme.so 00:02:05.258 CC lib/bdev/bdev.o 00:02:05.258 CC lib/bdev/bdev_rpc.o 00:02:05.258 CC lib/bdev/bdev_zone.o 00:02:05.258 CC lib/bdev/scsi_nvme.o 00:02:05.258 CC lib/bdev/part.o 00:02:05.519 LIB libspdk_fuse_dispatcher.a 00:02:05.519 SO libspdk_fuse_dispatcher.so.1.0 00:02:05.519 SYMLINK libspdk_fuse_dispatcher.so 00:02:06.463 LIB libspdk_blob.a 00:02:06.463 SO libspdk_blob.so.11.0 00:02:06.724 SYMLINK libspdk_blob.so 00:02:06.985 CC lib/lvol/lvol.o 00:02:06.985 CC lib/blobfs/blobfs.o 00:02:06.985 CC lib/blobfs/tree.o 00:02:07.929 LIB libspdk_bdev.a 00:02:07.929 SO libspdk_bdev.so.17.0 00:02:07.929 LIB libspdk_blobfs.a 00:02:07.929 SO libspdk_blobfs.so.10.0 00:02:07.929 SYMLINK libspdk_bdev.so 00:02:07.929 LIB libspdk_lvol.a 00:02:07.929 SYMLINK libspdk_blobfs.so 00:02:07.929 SO libspdk_lvol.so.10.0 00:02:07.929 SYMLINK libspdk_lvol.so 00:02:08.191 CC lib/nvmf/ctrlr.o 00:02:08.191 CC lib/nvmf/ctrlr_discovery.o 00:02:08.191 CC lib/nvmf/ctrlr_bdev.o 00:02:08.191 CC lib/ublk/ublk.o 00:02:08.191 CC lib/nvmf/subsystem.o 00:02:08.191 CC lib/scsi/dev.o 00:02:08.191 CC lib/nvmf/nvmf.o 00:02:08.191 CC lib/nbd/nbd.o 00:02:08.191 CC lib/ublk/ublk_rpc.o 00:02:08.191 CC lib/ftl/ftl_core.o 00:02:08.191 CC lib/scsi/lun.o 00:02:08.191 CC lib/nvmf/nvmf_rpc.o 00:02:08.191 CC lib/nbd/nbd_rpc.o 00:02:08.191 CC lib/ftl/ftl_init.o 00:02:08.191 CC lib/nvmf/transport.o 00:02:08.191 CC lib/scsi/port.o 00:02:08.191 CC lib/ftl/ftl_layout.o 00:02:08.191 CC lib/nvmf/tcp.o 00:02:08.191 CC lib/scsi/scsi.o 00:02:08.191 CC lib/nvmf/stubs.o 00:02:08.191 CC lib/ftl/ftl_debug.o 00:02:08.191 CC lib/scsi/scsi_bdev.o 00:02:08.191 CC lib/ftl/ftl_io.o 00:02:08.191 CC lib/nvmf/mdns_server.o 00:02:08.191 CC lib/scsi/scsi_pr.o 00:02:08.191 CC lib/nvmf/vfio_user.o 00:02:08.191 CC lib/ftl/ftl_sb.o 00:02:08.191 CC lib/scsi/scsi_rpc.o 00:02:08.191 CC lib/ftl/ftl_l2p.o 00:02:08.191 CC lib/nvmf/rdma.o 00:02:08.191 CC lib/scsi/task.o 00:02:08.191 CC lib/nvmf/auth.o 00:02:08.191 CC lib/ftl/ftl_l2p_flat.o 00:02:08.191 CC lib/ftl/ftl_nv_cache.o 00:02:08.191 CC lib/ftl/ftl_band.o 00:02:08.191 CC lib/ftl/ftl_band_ops.o 00:02:08.191 CC lib/ftl/ftl_writer.o 00:02:08.191 CC lib/ftl/ftl_rq.o 00:02:08.191 CC lib/ftl/ftl_reloc.o 00:02:08.191 CC lib/ftl/ftl_l2p_cache.o 00:02:08.191 CC lib/ftl/ftl_p2l.o 00:02:08.191 CC lib/ftl/ftl_p2l_log.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:08.191 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:08.191 CC lib/ftl/utils/ftl_conf.o 00:02:08.191 CC lib/ftl/utils/ftl_md.o 00:02:08.191 CC 
lib/ftl/utils/ftl_mempool.o 00:02:08.191 CC lib/ftl/utils/ftl_bitmap.o 00:02:08.191 CC lib/ftl/utils/ftl_property.o 00:02:08.191 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:08.191 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:08.191 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:08.191 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:08.191 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:08.191 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:08.191 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:08.191 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:08.191 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:08.191 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:08.191 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:08.191 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:08.191 CC lib/ftl/base/ftl_base_dev.o 00:02:08.191 CC lib/ftl/base/ftl_base_bdev.o 00:02:08.191 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:08.458 CC lib/ftl/ftl_trace.o 00:02:09.028 LIB libspdk_nbd.a 00:02:09.028 SO libspdk_nbd.so.7.0 00:02:09.028 LIB libspdk_scsi.a 00:02:09.028 SYMLINK libspdk_nbd.so 00:02:09.028 SO libspdk_scsi.so.9.0 00:02:09.288 LIB libspdk_ublk.a 00:02:09.288 SYMLINK libspdk_scsi.so 00:02:09.288 SO libspdk_ublk.so.3.0 00:02:09.288 SYMLINK libspdk_ublk.so 00:02:09.548 LIB libspdk_ftl.a 00:02:09.548 CC lib/iscsi/conn.o 00:02:09.548 CC lib/iscsi/init_grp.o 00:02:09.548 CC lib/iscsi/iscsi.o 00:02:09.548 CC lib/iscsi/param.o 00:02:09.548 CC lib/vhost/vhost.o 00:02:09.548 CC lib/iscsi/portal_grp.o 00:02:09.548 CC lib/iscsi/tgt_node.o 00:02:09.548 CC lib/vhost/vhost_rpc.o 00:02:09.548 CC lib/iscsi/iscsi_subsystem.o 00:02:09.548 CC lib/vhost/vhost_scsi.o 00:02:09.548 CC lib/iscsi/iscsi_rpc.o 00:02:09.548 CC lib/vhost/vhost_blk.o 00:02:09.548 CC lib/iscsi/task.o 00:02:09.548 CC lib/vhost/rte_vhost_user.o 00:02:09.810 SO libspdk_ftl.so.9.0 00:02:10.071 SYMLINK libspdk_ftl.so 00:02:10.332 LIB libspdk_nvmf.a 00:02:10.592 SO libspdk_nvmf.so.20.0 00:02:10.592 LIB libspdk_vhost.a 00:02:10.592 SO libspdk_vhost.so.8.0 00:02:10.592 SYMLINK libspdk_nvmf.so 00:02:10.853 SYMLINK libspdk_vhost.so 00:02:10.853 LIB libspdk_iscsi.a 00:02:10.853 SO libspdk_iscsi.so.8.0 00:02:11.114 SYMLINK libspdk_iscsi.so 00:02:11.687 CC module/env_dpdk/env_dpdk_rpc.o 00:02:11.687 CC module/vfu_device/vfu_virtio.o 00:02:11.687 CC module/vfu_device/vfu_virtio_blk.o 00:02:11.687 CC module/vfu_device/vfu_virtio_scsi.o 00:02:11.687 CC module/vfu_device/vfu_virtio_rpc.o 00:02:11.687 CC module/vfu_device/vfu_virtio_fs.o 00:02:11.687 LIB libspdk_env_dpdk_rpc.a 00:02:11.687 CC module/accel/iaa/accel_iaa.o 00:02:11.687 CC module/accel/error/accel_error.o 00:02:11.687 CC module/scheduler/gscheduler/gscheduler.o 00:02:11.687 CC module/accel/ioat/accel_ioat.o 00:02:11.687 CC module/accel/iaa/accel_iaa_rpc.o 00:02:11.687 CC module/accel/error/accel_error_rpc.o 00:02:11.687 CC module/accel/ioat/accel_ioat_rpc.o 00:02:11.687 CC module/accel/dsa/accel_dsa.o 00:02:11.947 CC module/accel/dsa/accel_dsa_rpc.o 00:02:11.947 CC module/sock/posix/posix.o 00:02:11.947 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:11.947 CC module/blob/bdev/blob_bdev.o 00:02:11.947 CC module/keyring/file/keyring.o 00:02:11.947 CC module/keyring/file/keyring_rpc.o 00:02:11.947 CC module/keyring/linux/keyring.o 00:02:11.947 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:11.947 CC module/fsdev/aio/fsdev_aio.o 00:02:11.947 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:11.947 CC module/keyring/linux/keyring_rpc.o 00:02:11.947 CC module/fsdev/aio/linux_aio_mgr.o 00:02:11.947 SO libspdk_env_dpdk_rpc.so.6.0 00:02:11.947 SYMLINK 
libspdk_env_dpdk_rpc.so 00:02:11.947 LIB libspdk_scheduler_gscheduler.a 00:02:11.947 LIB libspdk_keyring_file.a 00:02:11.947 LIB libspdk_keyring_linux.a 00:02:11.947 LIB libspdk_scheduler_dpdk_governor.a 00:02:11.947 LIB libspdk_accel_error.a 00:02:11.947 SO libspdk_scheduler_gscheduler.so.4.0 00:02:11.947 SO libspdk_keyring_file.so.2.0 00:02:11.947 LIB libspdk_accel_ioat.a 00:02:11.947 LIB libspdk_accel_iaa.a 00:02:12.208 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:12.208 LIB libspdk_scheduler_dynamic.a 00:02:12.208 SO libspdk_keyring_linux.so.1.0 00:02:12.208 SO libspdk_accel_error.so.2.0 00:02:12.208 SO libspdk_accel_iaa.so.3.0 00:02:12.208 SO libspdk_accel_ioat.so.6.0 00:02:12.208 SYMLINK libspdk_scheduler_gscheduler.so 00:02:12.208 SO libspdk_scheduler_dynamic.so.4.0 00:02:12.208 SYMLINK libspdk_keyring_file.so 00:02:12.208 LIB libspdk_blob_bdev.a 00:02:12.208 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:12.208 LIB libspdk_accel_dsa.a 00:02:12.208 SYMLINK libspdk_accel_error.so 00:02:12.208 SYMLINK libspdk_keyring_linux.so 00:02:12.208 SYMLINK libspdk_accel_iaa.so 00:02:12.208 SYMLINK libspdk_accel_ioat.so 00:02:12.208 SO libspdk_blob_bdev.so.11.0 00:02:12.208 SYMLINK libspdk_scheduler_dynamic.so 00:02:12.208 SO libspdk_accel_dsa.so.5.0 00:02:12.208 LIB libspdk_vfu_device.a 00:02:12.208 SYMLINK libspdk_blob_bdev.so 00:02:12.208 SYMLINK libspdk_accel_dsa.so 00:02:12.208 SO libspdk_vfu_device.so.3.0 00:02:12.470 SYMLINK libspdk_vfu_device.so 00:02:12.470 LIB libspdk_fsdev_aio.a 00:02:12.470 SO libspdk_fsdev_aio.so.1.0 00:02:12.470 LIB libspdk_sock_posix.a 00:02:12.731 SO libspdk_sock_posix.so.6.0 00:02:12.731 SYMLINK libspdk_fsdev_aio.so 00:02:12.731 SYMLINK libspdk_sock_posix.so 00:02:12.731 CC module/bdev/error/vbdev_error.o 00:02:12.731 CC module/bdev/error/vbdev_error_rpc.o 00:02:12.731 CC module/bdev/null/bdev_null.o 00:02:12.731 CC module/blobfs/bdev/blobfs_bdev.o 00:02:12.731 CC module/bdev/null/bdev_null_rpc.o 00:02:12.731 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:12.731 CC module/bdev/delay/vbdev_delay.o 00:02:12.731 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:12.731 CC module/bdev/gpt/gpt.o 00:02:12.731 CC module/bdev/gpt/vbdev_gpt.o 00:02:12.731 CC module/bdev/malloc/bdev_malloc.o 00:02:12.731 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:12.731 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:12.731 CC module/bdev/lvol/vbdev_lvol.o 00:02:12.731 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:12.731 CC module/bdev/split/vbdev_split.o 00:02:12.731 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:12.731 CC module/bdev/split/vbdev_split_rpc.o 00:02:12.731 CC module/bdev/passthru/vbdev_passthru.o 00:02:12.731 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:12.731 CC module/bdev/nvme/bdev_nvme.o 00:02:12.731 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:12.992 CC module/bdev/nvme/nvme_rpc.o 00:02:12.992 CC module/bdev/ftl/bdev_ftl.o 00:02:12.992 CC module/bdev/nvme/bdev_mdns_client.o 00:02:12.992 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:12.992 CC module/bdev/iscsi/bdev_iscsi.o 00:02:12.992 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:12.992 CC module/bdev/nvme/vbdev_opal.o 00:02:12.992 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:12.992 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:12.992 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:12.992 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:12.992 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:12.992 CC module/bdev/aio/bdev_aio.o 00:02:12.992 CC module/bdev/raid/bdev_raid.o 00:02:12.992 CC module/bdev/raid/bdev_raid_sb.o 
00:02:12.992 CC module/bdev/aio/bdev_aio_rpc.o 00:02:12.992 CC module/bdev/raid/bdev_raid_rpc.o 00:02:12.992 CC module/bdev/raid/raid0.o 00:02:12.992 CC module/bdev/raid/raid1.o 00:02:12.992 CC module/bdev/raid/concat.o 00:02:13.253 LIB libspdk_blobfs_bdev.a 00:02:13.253 LIB libspdk_bdev_split.a 00:02:13.253 LIB libspdk_bdev_error.a 00:02:13.253 SO libspdk_blobfs_bdev.so.6.0 00:02:13.253 SO libspdk_bdev_error.so.6.0 00:02:13.253 SO libspdk_bdev_split.so.6.0 00:02:13.253 LIB libspdk_bdev_null.a 00:02:13.253 LIB libspdk_bdev_passthru.a 00:02:13.253 SO libspdk_bdev_null.so.6.0 00:02:13.253 LIB libspdk_bdev_gpt.a 00:02:13.253 LIB libspdk_bdev_ftl.a 00:02:13.253 SYMLINK libspdk_bdev_error.so 00:02:13.253 SO libspdk_bdev_passthru.so.6.0 00:02:13.253 SYMLINK libspdk_blobfs_bdev.so 00:02:13.253 LIB libspdk_bdev_malloc.a 00:02:13.253 SYMLINK libspdk_bdev_split.so 00:02:13.253 SO libspdk_bdev_gpt.so.6.0 00:02:13.253 SO libspdk_bdev_ftl.so.6.0 00:02:13.253 LIB libspdk_bdev_zone_block.a 00:02:13.253 LIB libspdk_bdev_aio.a 00:02:13.253 SYMLINK libspdk_bdev_null.so 00:02:13.253 LIB libspdk_bdev_iscsi.a 00:02:13.253 LIB libspdk_bdev_delay.a 00:02:13.253 SO libspdk_bdev_malloc.so.6.0 00:02:13.253 SYMLINK libspdk_bdev_passthru.so 00:02:13.514 SO libspdk_bdev_iscsi.so.6.0 00:02:13.514 SO libspdk_bdev_aio.so.6.0 00:02:13.514 SO libspdk_bdev_zone_block.so.6.0 00:02:13.514 SYMLINK libspdk_bdev_gpt.so 00:02:13.514 SO libspdk_bdev_delay.so.6.0 00:02:13.514 SYMLINK libspdk_bdev_ftl.so 00:02:13.514 SYMLINK libspdk_bdev_malloc.so 00:02:13.514 SYMLINK libspdk_bdev_aio.so 00:02:13.514 SYMLINK libspdk_bdev_delay.so 00:02:13.514 SYMLINK libspdk_bdev_iscsi.so 00:02:13.514 LIB libspdk_bdev_lvol.a 00:02:13.514 SYMLINK libspdk_bdev_zone_block.so 00:02:13.514 LIB libspdk_bdev_virtio.a 00:02:13.514 SO libspdk_bdev_lvol.so.6.0 00:02:13.514 SO libspdk_bdev_virtio.so.6.0 00:02:13.514 SYMLINK libspdk_bdev_lvol.so 00:02:13.514 SYMLINK libspdk_bdev_virtio.so 00:02:14.085 LIB libspdk_bdev_raid.a 00:02:14.085 SO libspdk_bdev_raid.so.6.0 00:02:14.085 SYMLINK libspdk_bdev_raid.so 00:02:15.127 LIB libspdk_bdev_nvme.a 00:02:15.387 SO libspdk_bdev_nvme.so.7.1 00:02:15.387 SYMLINK libspdk_bdev_nvme.so 00:02:16.329 CC module/event/subsystems/vmd/vmd.o 00:02:16.329 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:16.329 CC module/event/subsystems/iobuf/iobuf.o 00:02:16.329 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:16.329 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:16.329 CC module/event/subsystems/fsdev/fsdev.o 00:02:16.329 CC module/event/subsystems/sock/sock.o 00:02:16.329 CC module/event/subsystems/keyring/keyring.o 00:02:16.329 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:16.329 CC module/event/subsystems/scheduler/scheduler.o 00:02:16.329 LIB libspdk_event_vfu_tgt.a 00:02:16.329 LIB libspdk_event_vhost_blk.a 00:02:16.329 LIB libspdk_event_keyring.a 00:02:16.329 LIB libspdk_event_vmd.a 00:02:16.329 LIB libspdk_event_fsdev.a 00:02:16.329 LIB libspdk_event_iobuf.a 00:02:16.329 LIB libspdk_event_scheduler.a 00:02:16.329 LIB libspdk_event_sock.a 00:02:16.329 SO libspdk_event_vhost_blk.so.3.0 00:02:16.329 SO libspdk_event_vfu_tgt.so.3.0 00:02:16.329 SO libspdk_event_keyring.so.1.0 00:02:16.329 SO libspdk_event_vmd.so.6.0 00:02:16.329 SO libspdk_event_fsdev.so.1.0 00:02:16.329 SO libspdk_event_iobuf.so.3.0 00:02:16.329 SO libspdk_event_sock.so.5.0 00:02:16.329 SO libspdk_event_scheduler.so.4.0 00:02:16.329 SYMLINK libspdk_event_vhost_blk.so 00:02:16.329 SYMLINK libspdk_event_vfu_tgt.so 00:02:16.329 SYMLINK 
libspdk_event_keyring.so 00:02:16.329 SYMLINK libspdk_event_vmd.so 00:02:16.329 SYMLINK libspdk_event_fsdev.so 00:02:16.329 SYMLINK libspdk_event_scheduler.so 00:02:16.329 SYMLINK libspdk_event_iobuf.so 00:02:16.329 SYMLINK libspdk_event_sock.so 00:02:16.900 CC module/event/subsystems/accel/accel.o 00:02:16.900 LIB libspdk_event_accel.a 00:02:16.900 SO libspdk_event_accel.so.6.0 00:02:17.160 SYMLINK libspdk_event_accel.so 00:02:17.421 CC module/event/subsystems/bdev/bdev.o 00:02:17.681 LIB libspdk_event_bdev.a 00:02:17.681 SO libspdk_event_bdev.so.6.0 00:02:17.681 SYMLINK libspdk_event_bdev.so 00:02:17.941 CC module/event/subsystems/scsi/scsi.o 00:02:17.941 CC module/event/subsystems/nbd/nbd.o 00:02:17.941 CC module/event/subsystems/ublk/ublk.o 00:02:17.941 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:17.941 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:18.237 LIB libspdk_event_nbd.a 00:02:18.237 LIB libspdk_event_ublk.a 00:02:18.237 LIB libspdk_event_scsi.a 00:02:18.237 SO libspdk_event_nbd.so.6.0 00:02:18.237 SO libspdk_event_ublk.so.3.0 00:02:18.237 SO libspdk_event_scsi.so.6.0 00:02:18.237 LIB libspdk_event_nvmf.a 00:02:18.237 SYMLINK libspdk_event_nbd.so 00:02:18.237 SYMLINK libspdk_event_ublk.so 00:02:18.237 SYMLINK libspdk_event_scsi.so 00:02:18.237 SO libspdk_event_nvmf.so.6.0 00:02:18.497 SYMLINK libspdk_event_nvmf.so 00:02:18.757 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:18.757 CC module/event/subsystems/iscsi/iscsi.o 00:02:18.757 LIB libspdk_event_vhost_scsi.a 00:02:19.018 LIB libspdk_event_iscsi.a 00:02:19.018 SO libspdk_event_vhost_scsi.so.3.0 00:02:19.018 SO libspdk_event_iscsi.so.6.0 00:02:19.018 SYMLINK libspdk_event_vhost_scsi.so 00:02:19.018 SYMLINK libspdk_event_iscsi.so 00:02:19.279 SO libspdk.so.6.0 00:02:19.279 SYMLINK libspdk.so 00:02:19.541 CC app/trace_record/trace_record.o 00:02:19.541 CC app/spdk_lspci/spdk_lspci.o 00:02:19.541 CXX app/trace/trace.o 00:02:19.541 CC test/rpc_client/rpc_client_test.o 00:02:19.541 CC app/spdk_nvme_identify/identify.o 00:02:19.541 CC app/spdk_nvme_perf/perf.o 00:02:19.541 CC app/spdk_top/spdk_top.o 00:02:19.541 CC app/spdk_nvme_discover/discovery_aer.o 00:02:19.541 TEST_HEADER include/spdk/accel.h 00:02:19.541 TEST_HEADER include/spdk/accel_module.h 00:02:19.541 TEST_HEADER include/spdk/assert.h 00:02:19.541 TEST_HEADER include/spdk/barrier.h 00:02:19.541 TEST_HEADER include/spdk/base64.h 00:02:19.541 TEST_HEADER include/spdk/bdev.h 00:02:19.541 TEST_HEADER include/spdk/bdev_module.h 00:02:19.541 TEST_HEADER include/spdk/bdev_zone.h 00:02:19.541 TEST_HEADER include/spdk/bit_array.h 00:02:19.541 TEST_HEADER include/spdk/bit_pool.h 00:02:19.541 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:19.541 TEST_HEADER include/spdk/blob_bdev.h 00:02:19.541 TEST_HEADER include/spdk/blobfs.h 00:02:19.541 TEST_HEADER include/spdk/blob.h 00:02:19.541 TEST_HEADER include/spdk/conf.h 00:02:19.541 TEST_HEADER include/spdk/config.h 00:02:19.541 TEST_HEADER include/spdk/cpuset.h 00:02:19.541 TEST_HEADER include/spdk/crc16.h 00:02:19.541 TEST_HEADER include/spdk/crc32.h 00:02:19.541 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:19.541 TEST_HEADER include/spdk/crc64.h 00:02:19.541 TEST_HEADER include/spdk/dif.h 00:02:19.541 TEST_HEADER include/spdk/dma.h 00:02:19.541 TEST_HEADER include/spdk/endian.h 00:02:19.541 CC app/spdk_dd/spdk_dd.o 00:02:19.541 TEST_HEADER include/spdk/env_dpdk.h 00:02:19.541 CC app/nvmf_tgt/nvmf_main.o 00:02:19.541 TEST_HEADER include/spdk/env.h 00:02:20.378 TEST_HEADER include/spdk/event.h 00:02:20.378 
TEST_HEADER include/spdk/fd_group.h 00:02:20.378 TEST_HEADER include/spdk/fd.h 00:02:20.378 TEST_HEADER include/spdk/file.h 00:02:20.378 TEST_HEADER include/spdk/fsdev.h 00:02:20.378 TEST_HEADER include/spdk/ftl.h 00:02:20.378 TEST_HEADER include/spdk/fsdev_module.h 00:02:20.378 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:20.378 TEST_HEADER include/spdk/gpt_spec.h 00:02:20.378 TEST_HEADER include/spdk/hexlify.h 00:02:20.378 TEST_HEADER include/spdk/histogram_data.h 00:02:20.378 TEST_HEADER include/spdk/idxd.h 00:02:20.378 CC app/iscsi_tgt/iscsi_tgt.o 00:02:20.378 TEST_HEADER include/spdk/idxd_spec.h 00:02:20.378 TEST_HEADER include/spdk/init.h 00:02:20.378 TEST_HEADER include/spdk/ioat.h 00:02:20.378 TEST_HEADER include/spdk/ioat_spec.h 00:02:20.378 TEST_HEADER include/spdk/json.h 00:02:20.378 TEST_HEADER include/spdk/iscsi_spec.h 00:02:20.378 TEST_HEADER include/spdk/keyring.h 00:02:20.378 TEST_HEADER include/spdk/jsonrpc.h 00:02:20.378 TEST_HEADER include/spdk/keyring_module.h 00:02:20.378 TEST_HEADER include/spdk/likely.h 00:02:20.378 TEST_HEADER include/spdk/log.h 00:02:20.378 TEST_HEADER include/spdk/lvol.h 00:02:20.378 CC app/spdk_tgt/spdk_tgt.o 00:02:20.378 TEST_HEADER include/spdk/md5.h 00:02:20.378 TEST_HEADER include/spdk/memory.h 00:02:20.378 TEST_HEADER include/spdk/mmio.h 00:02:20.378 TEST_HEADER include/spdk/net.h 00:02:20.378 TEST_HEADER include/spdk/nbd.h 00:02:20.378 TEST_HEADER include/spdk/notify.h 00:02:20.378 TEST_HEADER include/spdk/nvme.h 00:02:20.378 TEST_HEADER include/spdk/nvme_intel.h 00:02:20.378 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:20.378 TEST_HEADER include/spdk/nvme_spec.h 00:02:20.378 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:20.378 TEST_HEADER include/spdk/nvme_zns.h 00:02:20.378 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:20.378 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:20.378 TEST_HEADER include/spdk/nvmf.h 00:02:20.378 TEST_HEADER include/spdk/nvmf_spec.h 00:02:20.378 TEST_HEADER include/spdk/opal.h 00:02:20.378 TEST_HEADER include/spdk/nvmf_transport.h 00:02:20.378 TEST_HEADER include/spdk/opal_spec.h 00:02:20.378 TEST_HEADER include/spdk/pci_ids.h 00:02:20.378 TEST_HEADER include/spdk/queue.h 00:02:20.378 TEST_HEADER include/spdk/pipe.h 00:02:20.378 TEST_HEADER include/spdk/reduce.h 00:02:20.378 TEST_HEADER include/spdk/scheduler.h 00:02:20.378 TEST_HEADER include/spdk/rpc.h 00:02:20.378 TEST_HEADER include/spdk/scsi.h 00:02:20.378 TEST_HEADER include/spdk/scsi_spec.h 00:02:20.378 TEST_HEADER include/spdk/sock.h 00:02:20.378 TEST_HEADER include/spdk/stdinc.h 00:02:20.378 TEST_HEADER include/spdk/thread.h 00:02:20.378 TEST_HEADER include/spdk/string.h 00:02:20.378 TEST_HEADER include/spdk/trace.h 00:02:20.378 TEST_HEADER include/spdk/tree.h 00:02:20.378 TEST_HEADER include/spdk/trace_parser.h 00:02:20.378 TEST_HEADER include/spdk/ublk.h 00:02:20.378 TEST_HEADER include/spdk/uuid.h 00:02:20.378 TEST_HEADER include/spdk/util.h 00:02:20.378 TEST_HEADER include/spdk/version.h 00:02:20.378 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:20.378 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:20.378 TEST_HEADER include/spdk/vhost.h 00:02:20.378 TEST_HEADER include/spdk/vmd.h 00:02:20.379 TEST_HEADER include/spdk/xor.h 00:02:20.379 TEST_HEADER include/spdk/zipf.h 00:02:20.379 CXX test/cpp_headers/accel.o 00:02:20.379 CXX test/cpp_headers/accel_module.o 00:02:20.379 CXX test/cpp_headers/assert.o 00:02:20.379 CXX test/cpp_headers/barrier.o 00:02:20.379 CXX test/cpp_headers/base64.o 00:02:20.379 CXX test/cpp_headers/bdev.o 00:02:20.379 CXX 
test/cpp_headers/bdev_module.o 00:02:20.379 CXX test/cpp_headers/bdev_zone.o 00:02:20.379 CXX test/cpp_headers/bit_array.o 00:02:20.379 CXX test/cpp_headers/blobfs_bdev.o 00:02:20.379 CXX test/cpp_headers/bit_pool.o 00:02:20.379 CXX test/cpp_headers/blob_bdev.o 00:02:20.379 CXX test/cpp_headers/blobfs.o 00:02:20.379 CXX test/cpp_headers/blob.o 00:02:20.379 CXX test/cpp_headers/conf.o 00:02:20.379 CXX test/cpp_headers/config.o 00:02:20.379 CXX test/cpp_headers/cpuset.o 00:02:20.379 CXX test/cpp_headers/crc16.o 00:02:20.379 CXX test/cpp_headers/crc64.o 00:02:20.379 CXX test/cpp_headers/dma.o 00:02:20.379 CXX test/cpp_headers/crc32.o 00:02:20.379 CXX test/cpp_headers/dif.o 00:02:20.379 CXX test/cpp_headers/env_dpdk.o 00:02:20.379 CXX test/cpp_headers/env.o 00:02:20.379 CXX test/cpp_headers/endian.o 00:02:20.379 CXX test/cpp_headers/event.o 00:02:20.379 CXX test/cpp_headers/fd_group.o 00:02:20.379 CXX test/cpp_headers/file.o 00:02:20.379 CXX test/cpp_headers/fd.o 00:02:20.379 CXX test/cpp_headers/fsdev_module.o 00:02:20.379 CXX test/cpp_headers/fsdev.o 00:02:20.379 CXX test/cpp_headers/ftl.o 00:02:20.379 CXX test/cpp_headers/fuse_dispatcher.o 00:02:20.379 CXX test/cpp_headers/gpt_spec.o 00:02:20.379 CXX test/cpp_headers/hexlify.o 00:02:20.379 CXX test/cpp_headers/idxd.o 00:02:20.379 CXX test/cpp_headers/histogram_data.o 00:02:20.379 CXX test/cpp_headers/idxd_spec.o 00:02:20.379 CXX test/cpp_headers/init.o 00:02:20.379 CXX test/cpp_headers/iscsi_spec.o 00:02:20.379 CXX test/cpp_headers/ioat_spec.o 00:02:20.379 CXX test/cpp_headers/ioat.o 00:02:20.379 CXX test/cpp_headers/jsonrpc.o 00:02:20.379 CXX test/cpp_headers/keyring_module.o 00:02:20.379 CXX test/cpp_headers/keyring.o 00:02:20.379 CXX test/cpp_headers/likely.o 00:02:20.379 CXX test/cpp_headers/json.o 00:02:20.379 CXX test/cpp_headers/md5.o 00:02:20.379 CXX test/cpp_headers/log.o 00:02:20.379 CXX test/cpp_headers/lvol.o 00:02:20.379 CXX test/cpp_headers/memory.o 00:02:20.379 CC test/thread/poller_perf/poller_perf.o 00:02:20.379 CXX test/cpp_headers/nbd.o 00:02:20.379 CXX test/cpp_headers/notify.o 00:02:20.379 CXX test/cpp_headers/mmio.o 00:02:20.379 CXX test/cpp_headers/nvme_intel.o 00:02:20.379 CXX test/cpp_headers/nvme_ocssd.o 00:02:20.379 CXX test/cpp_headers/net.o 00:02:20.379 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:20.379 CXX test/cpp_headers/nvme_spec.o 00:02:20.379 CXX test/cpp_headers/nvmf_cmd.o 00:02:20.379 CXX test/cpp_headers/nvme.o 00:02:20.379 CXX test/cpp_headers/nvme_zns.o 00:02:20.379 CXX test/cpp_headers/nvmf.o 00:02:20.379 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:20.379 CXX test/cpp_headers/nvmf_transport.o 00:02:20.379 LINK spdk_lspci 00:02:20.379 CXX test/cpp_headers/opal.o 00:02:20.379 CXX test/cpp_headers/opal_spec.o 00:02:20.379 CXX test/cpp_headers/nvmf_spec.o 00:02:20.379 CC examples/ioat/perf/perf.o 00:02:20.379 CXX test/cpp_headers/pci_ids.o 00:02:20.379 CXX test/cpp_headers/queue.o 00:02:20.379 CXX test/cpp_headers/pipe.o 00:02:20.379 CXX test/cpp_headers/reduce.o 00:02:20.379 CXX test/cpp_headers/rpc.o 00:02:20.379 CC examples/util/zipf/zipf.o 00:02:20.379 CXX test/cpp_headers/scheduler.o 00:02:20.379 CC test/app/histogram_perf/histogram_perf.o 00:02:20.379 CXX test/cpp_headers/scsi_spec.o 00:02:20.379 CXX test/cpp_headers/scsi.o 00:02:20.379 CXX test/cpp_headers/stdinc.o 00:02:20.379 CXX test/cpp_headers/sock.o 00:02:20.379 CXX test/cpp_headers/string.o 00:02:20.379 CXX test/cpp_headers/thread.o 00:02:20.379 CXX test/cpp_headers/trace.o 00:02:20.379 CXX test/cpp_headers/trace_parser.o 00:02:20.379 CC 
test/dma/test_dma/test_dma.o 00:02:20.379 CXX test/cpp_headers/tree.o 00:02:20.379 CC test/app/jsoncat/jsoncat.o 00:02:20.379 CXX test/cpp_headers/ublk.o 00:02:20.379 CC examples/ioat/verify/verify.o 00:02:20.379 CXX test/cpp_headers/uuid.o 00:02:20.379 CXX test/cpp_headers/version.o 00:02:20.379 CC test/env/vtophys/vtophys.o 00:02:20.379 CC test/env/memory/memory_ut.o 00:02:20.379 CXX test/cpp_headers/util.o 00:02:20.379 CXX test/cpp_headers/vhost.o 00:02:20.379 CXX test/cpp_headers/vfio_user_pci.o 00:02:20.379 CXX test/cpp_headers/vmd.o 00:02:20.379 CXX test/cpp_headers/vfio_user_spec.o 00:02:20.379 CXX test/cpp_headers/xor.o 00:02:20.379 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:20.379 CC test/env/pci/pci_ut.o 00:02:20.379 CXX test/cpp_headers/zipf.o 00:02:20.379 CC test/app/bdev_svc/bdev_svc.o 00:02:20.379 CC app/fio/nvme/fio_plugin.o 00:02:20.379 CC test/app/stub/stub.o 00:02:20.379 CC app/fio/bdev/fio_plugin.o 00:02:20.379 LINK rpc_client_test 00:02:20.379 LINK nvmf_tgt 00:02:20.379 LINK iscsi_tgt 00:02:20.379 LINK spdk_trace_record 00:02:20.379 LINK interrupt_tgt 00:02:20.379 LINK spdk_nvme_discover 00:02:20.652 LINK jsoncat 00:02:20.652 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:20.652 LINK spdk_dd 00:02:20.652 LINK spdk_tgt 00:02:20.652 LINK poller_perf 00:02:20.652 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:20.652 CC test/env/mem_callbacks/mem_callbacks.o 00:02:20.914 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:20.914 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:20.914 LINK histogram_perf 00:02:20.914 LINK vtophys 00:02:20.914 LINK spdk_trace 00:02:21.174 LINK bdev_svc 00:02:21.174 LINK verify 00:02:21.174 LINK zipf 00:02:21.435 LINK env_dpdk_post_init 00:02:21.435 LINK spdk_nvme 00:02:21.435 LINK stub 00:02:21.435 LINK ioat_perf 00:02:21.435 CC test/event/reactor/reactor.o 00:02:21.435 CC test/event/app_repeat/app_repeat.o 00:02:21.435 CC test/event/event_perf/event_perf.o 00:02:21.435 CC test/event/reactor_perf/reactor_perf.o 00:02:21.435 LINK test_dma 00:02:21.435 LINK vhost_fuzz 00:02:21.435 CC test/event/scheduler/scheduler.o 00:02:21.435 CC app/vhost/vhost.o 00:02:21.435 LINK spdk_top 00:02:21.435 LINK reactor 00:02:21.435 LINK nvme_fuzz 00:02:21.695 LINK reactor_perf 00:02:21.695 LINK event_perf 00:02:21.695 LINK mem_callbacks 00:02:21.695 LINK app_repeat 00:02:21.695 LINK pci_ut 00:02:21.695 LINK spdk_nvme_identify 00:02:21.695 LINK spdk_bdev 00:02:21.695 LINK vhost 00:02:21.695 LINK scheduler 00:02:21.695 CC examples/idxd/perf/perf.o 00:02:21.695 CC examples/vmd/lsvmd/lsvmd.o 00:02:21.695 CC examples/sock/hello_world/hello_sock.o 00:02:21.695 CC examples/vmd/led/led.o 00:02:21.695 CC examples/thread/thread/thread_ex.o 00:02:21.956 LINK spdk_nvme_perf 00:02:21.956 LINK lsvmd 00:02:21.956 LINK led 00:02:21.956 CC test/nvme/reset/reset.o 00:02:21.956 CC test/nvme/e2edp/nvme_dp.o 00:02:21.956 CC test/nvme/aer/aer.o 00:02:21.956 CC test/nvme/sgl/sgl.o 00:02:21.956 CC test/nvme/err_injection/err_injection.o 00:02:21.956 CC test/nvme/overhead/overhead.o 00:02:21.956 CC test/nvme/fdp/fdp.o 00:02:21.956 CC test/nvme/cuse/cuse.o 00:02:21.956 CC test/nvme/compliance/nvme_compliance.o 00:02:21.956 CC test/nvme/reserve/reserve.o 00:02:21.956 CC test/nvme/simple_copy/simple_copy.o 00:02:21.956 CC test/nvme/fused_ordering/fused_ordering.o 00:02:21.956 CC test/nvme/startup/startup.o 00:02:21.956 CC test/nvme/connect_stress/connect_stress.o 00:02:21.956 CC test/nvme/boot_partition/boot_partition.o 00:02:21.956 CC test/nvme/doorbell_aers/doorbell_aers.o 
00:02:22.217 CC test/blobfs/mkfs/mkfs.o 00:02:22.217 CC test/accel/dif/dif.o 00:02:22.217 LINK memory_ut 00:02:22.217 LINK hello_sock 00:02:22.217 LINK thread 00:02:22.217 LINK idxd_perf 00:02:22.217 CC test/lvol/esnap/esnap.o 00:02:22.217 LINK boot_partition 00:02:22.217 LINK startup 00:02:22.478 LINK reserve 00:02:22.478 LINK connect_stress 00:02:22.478 LINK err_injection 00:02:22.478 LINK fused_ordering 00:02:22.478 LINK doorbell_aers 00:02:22.478 LINK mkfs 00:02:22.478 LINK nvme_dp 00:02:22.478 LINK simple_copy 00:02:22.478 LINK reset 00:02:22.478 LINK aer 00:02:22.478 LINK sgl 00:02:22.478 LINK overhead 00:02:22.478 LINK fdp 00:02:22.478 LINK nvme_compliance 00:02:22.478 LINK iscsi_fuzz 00:02:22.741 CC examples/nvme/arbitration/arbitration.o 00:02:22.741 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:22.741 CC examples/nvme/hello_world/hello_world.o 00:02:22.741 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:22.741 CC examples/nvme/reconnect/reconnect.o 00:02:22.741 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:22.741 CC examples/nvme/hotplug/hotplug.o 00:02:22.741 CC examples/nvme/abort/abort.o 00:02:22.741 LINK dif 00:02:22.741 CC examples/accel/perf/accel_perf.o 00:02:22.741 CC examples/blob/cli/blobcli.o 00:02:22.741 CC examples/blob/hello_world/hello_blob.o 00:02:22.741 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:22.741 LINK cmb_copy 00:02:23.002 LINK pmr_persistence 00:02:23.002 LINK hello_world 00:02:23.002 LINK hotplug 00:02:23.002 LINK arbitration 00:02:23.002 LINK reconnect 00:02:23.002 LINK abort 00:02:23.002 LINK hello_blob 00:02:23.263 LINK hello_fsdev 00:02:23.263 LINK nvme_manage 00:02:23.263 LINK accel_perf 00:02:23.263 LINK blobcli 00:02:23.263 LINK cuse 00:02:23.263 CC test/bdev/bdevio/bdevio.o 00:02:23.835 LINK bdevio 00:02:23.835 CC examples/bdev/hello_world/hello_bdev.o 00:02:23.835 CC examples/bdev/bdevperf/bdevperf.o 00:02:24.096 LINK hello_bdev 00:02:24.668 LINK bdevperf 00:02:25.240 CC examples/nvmf/nvmf/nvmf.o 00:02:25.501 LINK nvmf 00:02:26.886 LINK esnap 00:02:27.147 00:02:27.147 real 0m56.350s 00:02:27.147 user 8m10.250s 00:02:27.147 sys 6m10.573s 00:02:27.147 15:14:44 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:27.147 15:14:44 make -- common/autotest_common.sh@10 -- $ set +x 00:02:27.147 ************************************ 00:02:27.147 END TEST make 00:02:27.147 ************************************ 00:02:27.147 15:14:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:27.147 15:14:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:27.147 15:14:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:27.147 15:14:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.147 15:14:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:27.147 15:14:45 -- pm/common@44 -- $ pid=3445468 00:02:27.147 15:14:45 -- pm/common@50 -- $ kill -TERM 3445468 00:02:27.147 15:14:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.147 15:14:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:27.147 15:14:45 -- pm/common@44 -- $ pid=3445469 00:02:27.147 15:14:45 -- pm/common@50 -- $ kill -TERM 3445469 00:02:27.147 15:14:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.147 15:14:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:27.147 
15:14:45 -- pm/common@44 -- $ pid=3445471 00:02:27.147 15:14:45 -- pm/common@50 -- $ kill -TERM 3445471 00:02:27.147 15:14:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.147 15:14:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:27.147 15:14:45 -- pm/common@44 -- $ pid=3445495 00:02:27.147 15:14:45 -- pm/common@50 -- $ sudo -E kill -TERM 3445495 00:02:27.147 15:14:45 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:27.147 15:14:45 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:27.410 15:14:45 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:27.410 15:14:45 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:27.410 15:14:45 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:27.410 15:14:45 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:27.410 15:14:45 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:27.410 15:14:45 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:27.410 15:14:45 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:27.410 15:14:45 -- scripts/common.sh@336 -- # IFS=.-: 00:02:27.410 15:14:45 -- scripts/common.sh@336 -- # read -ra ver1 00:02:27.410 15:14:45 -- scripts/common.sh@337 -- # IFS=.-: 00:02:27.410 15:14:45 -- scripts/common.sh@337 -- # read -ra ver2 00:02:27.410 15:14:45 -- scripts/common.sh@338 -- # local 'op=<' 00:02:27.410 15:14:45 -- scripts/common.sh@340 -- # ver1_l=2 00:02:27.410 15:14:45 -- scripts/common.sh@341 -- # ver2_l=1 00:02:27.410 15:14:45 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:27.410 15:14:45 -- scripts/common.sh@344 -- # case "$op" in 00:02:27.410 15:14:45 -- scripts/common.sh@345 -- # : 1 00:02:27.410 15:14:45 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:27.410 15:14:45 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:27.410 15:14:45 -- scripts/common.sh@365 -- # decimal 1 00:02:27.410 15:14:45 -- scripts/common.sh@353 -- # local d=1 00:02:27.410 15:14:45 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:27.410 15:14:45 -- scripts/common.sh@355 -- # echo 1 00:02:27.410 15:14:45 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:27.410 15:14:45 -- scripts/common.sh@366 -- # decimal 2 00:02:27.410 15:14:45 -- scripts/common.sh@353 -- # local d=2 00:02:27.410 15:14:45 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:27.410 15:14:45 -- scripts/common.sh@355 -- # echo 2 00:02:27.410 15:14:45 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:27.410 15:14:45 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:27.410 15:14:45 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:27.410 15:14:45 -- scripts/common.sh@368 -- # return 0 00:02:27.410 15:14:45 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:27.410 15:14:45 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:27.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:27.410 --rc genhtml_branch_coverage=1 00:02:27.410 --rc genhtml_function_coverage=1 00:02:27.410 --rc genhtml_legend=1 00:02:27.410 --rc geninfo_all_blocks=1 00:02:27.410 --rc geninfo_unexecuted_blocks=1 00:02:27.410 00:02:27.410 ' 00:02:27.410 15:14:45 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:27.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:27.410 --rc genhtml_branch_coverage=1 00:02:27.410 --rc genhtml_function_coverage=1 00:02:27.410 --rc genhtml_legend=1 00:02:27.410 --rc geninfo_all_blocks=1 00:02:27.410 --rc geninfo_unexecuted_blocks=1 00:02:27.410 00:02:27.410 ' 00:02:27.410 15:14:45 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:27.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:27.410 --rc genhtml_branch_coverage=1 00:02:27.410 --rc genhtml_function_coverage=1 00:02:27.410 --rc genhtml_legend=1 00:02:27.410 --rc geninfo_all_blocks=1 00:02:27.410 --rc geninfo_unexecuted_blocks=1 00:02:27.410 00:02:27.410 ' 00:02:27.410 15:14:45 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:27.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:27.410 --rc genhtml_branch_coverage=1 00:02:27.410 --rc genhtml_function_coverage=1 00:02:27.410 --rc genhtml_legend=1 00:02:27.410 --rc geninfo_all_blocks=1 00:02:27.410 --rc geninfo_unexecuted_blocks=1 00:02:27.410 00:02:27.410 ' 00:02:27.410 15:14:45 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:27.410 15:14:45 -- nvmf/common.sh@7 -- # uname -s 00:02:27.410 15:14:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:27.410 15:14:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:27.410 15:14:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:27.410 15:14:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:27.410 15:14:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:27.410 15:14:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:27.410 15:14:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:27.410 15:14:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:27.410 15:14:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:27.410 15:14:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:27.410 15:14:45 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:27.410 15:14:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:27.410 15:14:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:27.410 15:14:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:27.410 15:14:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:27.410 15:14:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:27.410 15:14:45 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:27.410 15:14:45 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:27.410 15:14:45 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:27.410 15:14:45 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:27.410 15:14:45 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:27.410 15:14:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.410 15:14:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.410 15:14:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.410 15:14:45 -- paths/export.sh@5 -- # export PATH 00:02:27.410 15:14:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.411 15:14:45 -- nvmf/common.sh@51 -- # : 0 00:02:27.411 15:14:45 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:27.411 15:14:45 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:27.411 15:14:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:27.411 15:14:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:27.411 15:14:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:27.411 15:14:45 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:27.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:27.411 15:14:45 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:27.411 15:14:45 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:27.411 15:14:45 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:27.411 15:14:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:27.411 15:14:45 -- spdk/autotest.sh@32 -- # uname -s 00:02:27.411 15:14:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:27.411 15:14:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:27.411 15:14:45 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
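Annotation on the "[: : integer expression expected" complaint a few records above: build_nvmf_app_args reaches nvmf/common.sh line 33 with an empty left operand ('[' '' -eq 1 ']'), so the numeric test itself errors out (harmlessly, since the branch is skipped either way). A hedged one-line hardening of that pattern, with FLAG standing in for whichever unset variable line 33 actually tests and the echo as a placeholder action:

[ "${FLAG:-0}" -eq 1 ] && echo "feature enabled"   # default the empty value to 0 before the numeric test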
00:02:27.411 15:14:45 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:27.411 15:14:45 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:27.411 15:14:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:27.411 15:14:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:27.411 15:14:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:27.411 15:14:45 -- spdk/autotest.sh@48 -- # udevadm_pid=3511010 00:02:27.411 15:14:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:27.411 15:14:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:27.411 15:14:45 -- pm/common@17 -- # local monitor 00:02:27.411 15:14:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.411 15:14:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.411 15:14:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.411 15:14:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.411 15:14:45 -- pm/common@21 -- # date +%s 00:02:27.411 15:14:45 -- pm/common@25 -- # sleep 1 00:02:27.411 15:14:45 -- pm/common@21 -- # date +%s 00:02:27.411 15:14:45 -- pm/common@21 -- # date +%s 00:02:27.411 15:14:45 -- pm/common@21 -- # date +%s 00:02:27.411 15:14:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902485 00:02:27.411 15:14:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902485 00:02:27.411 15:14:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902485 00:02:27.411 15:14:45 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902485 00:02:27.411 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730902485_collect-cpu-load.pm.log 00:02:27.672 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730902485_collect-vmstat.pm.log 00:02:27.672 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730902485_collect-cpu-temp.pm.log 00:02:27.672 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730902485_collect-bmc-pm.bmc.pm.log 00:02:28.614 15:14:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:28.614 15:14:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:28.614 15:14:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:28.614 15:14:46 -- common/autotest_common.sh@10 -- # set +x 00:02:28.614 15:14:46 -- spdk/autotest.sh@59 -- # create_test_list 00:02:28.614 15:14:46 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:28.614 15:14:46 -- common/autotest_common.sh@10 -- # set +x 00:02:28.614 15:14:46 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:28.614 15:14:46 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.614 15:14:46 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.614 15:14:46 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:28.614 15:14:46 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.614 15:14:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:28.614 15:14:46 -- common/autotest_common.sh@1455 -- # uname 00:02:28.614 15:14:46 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:28.614 15:14:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:28.614 15:14:46 -- common/autotest_common.sh@1475 -- # uname 00:02:28.614 15:14:46 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:28.614 15:14:46 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:28.614 15:14:46 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:28.614 lcov: LCOV version 1.15 00:02:28.614 15:14:46 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:55.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:55.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:59.404 15:15:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:59.404 15:15:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:59.404 15:15:16 -- common/autotest_common.sh@10 -- # set +x 00:02:59.404 15:15:16 -- spdk/autotest.sh@78 -- # rm -f 00:02:59.405 15:15:16 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.708 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:02.708 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:02.708 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:02.969 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:03.230 15:15:20 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:03.230 15:15:20 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:03.230 15:15:20 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:03.230 15:15:20 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:03.230 15:15:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:03.230 15:15:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:03.230 15:15:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:03.230 15:15:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:03.230 15:15:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:03.230 15:15:20 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:03.230 15:15:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:03.230 15:15:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:03.230 15:15:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:03.230 15:15:20 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:03.230 15:15:20 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:03.230 No valid GPT data, bailing 00:03:03.230 15:15:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:03.230 15:15:21 -- scripts/common.sh@394 -- # pt= 00:03:03.230 15:15:21 -- scripts/common.sh@395 -- # return 1 00:03:03.230 15:15:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:03.230 1+0 records in 00:03:03.230 1+0 records out 00:03:03.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00171315 s, 612 MB/s 00:03:03.230 15:15:21 -- spdk/autotest.sh@105 -- # sync 00:03:03.230 15:15:21 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:03.230 15:15:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:03.230 15:15:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:13.228 15:15:29 -- spdk/autotest.sh@111 -- # uname -s 00:03:13.228 15:15:29 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:13.228 15:15:29 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:13.228 15:15:29 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:15.138 Hugepages 00:03:15.138 node hugesize free / total 00:03:15.138 node0 1048576kB 0 / 0 00:03:15.398 node0 2048kB 0 / 0 00:03:15.398 node1 1048576kB 0 / 0 00:03:15.398 node1 2048kB 0 / 0 00:03:15.398 00:03:15.398 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:15.398 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:15.398 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:15.398 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:15.398 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:15.398 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:15.398 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:15.398 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:15.398 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:15.398 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:15.398 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:15.398 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:15.398 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:15.398 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:15.398 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:15.398 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:15.398 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:15.398 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:15.398 15:15:33 -- spdk/autotest.sh@117 -- # uname -s 00:03:15.398 15:15:33 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:15.398 15:15:33 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:15.398 15:15:33 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.596 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:19.596 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:20.977 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:21.237 15:15:39 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:22.189 15:15:40 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:22.189 15:15:40 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:22.189 15:15:40 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:22.189 15:15:40 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:22.189 15:15:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:22.189 15:15:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:22.189 15:15:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:22.189 15:15:40 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:22.189 15:15:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:22.449 15:15:40 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:22.449 15:15:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:22.449 15:15:40 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.859 Waiting for block devices as requested 00:03:25.859 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:25.859 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:26.118 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:26.118 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:26.118 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:26.378 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:26.378 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:26.378 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:26.638 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:26.638 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:26.898 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:26.898 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:26.898 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:27.157 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:27.157 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:27.157 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:27.417 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:03:27.677 15:15:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:27.677 15:15:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:27.677 15:15:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:27.677 15:15:45 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:27.677 15:15:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:27.677 15:15:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:27.677 15:15:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:27.677 15:15:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:27.677 15:15:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:27.677 15:15:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:27.677 15:15:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:27.677 15:15:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:27.677 15:15:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:27.677 15:15:45 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:27.677 15:15:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:27.677 15:15:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:27.677 15:15:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:27.677 15:15:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:27.677 15:15:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:27.677 15:15:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:27.677 15:15:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:27.678 15:15:45 -- common/autotest_common.sh@1541 -- # continue 00:03:27.678 15:15:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:27.678 15:15:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:27.678 15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:03:27.678 15:15:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:27.678 15:15:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:27.678 15:15:45 -- common/autotest_common.sh@10 -- # set +x 00:03:27.678 15:15:45 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:31.877 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:31.877 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:31.877 15:15:49 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:31.877 15:15:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:31.877 15:15:49 -- common/autotest_common.sh@10 -- # set +x 00:03:31.877 15:15:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:31.877 15:15:49 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:31.877 15:15:49 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:31.877 15:15:49 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:31.877 15:15:49 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:31.877 15:15:49 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:31.877 15:15:49 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:31.877 15:15:49 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:31.877 15:15:49 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:31.877 15:15:49 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:31.877 15:15:49 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:31.877 15:15:49 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:31.877 15:15:49 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:31.877 15:15:49 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:31.877 15:15:49 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:31.877 15:15:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:31.877 15:15:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:31.877 15:15:49 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:31.877 15:15:49 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:31.877 15:15:49 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:31.877 15:15:49 -- common/autotest_common.sh@1570 -- # return 0 00:03:31.877 15:15:49 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:31.877 15:15:49 -- common/autotest_common.sh@1578 -- # return 0 00:03:31.877 15:15:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:31.877 15:15:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:31.877 15:15:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:31.877 15:15:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:31.877 15:15:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:31.877 15:15:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:31.877 15:15:49 -- common/autotest_common.sh@10 -- # set +x 00:03:31.877 15:15:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:31.877 15:15:49 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:31.877 15:15:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:31.877 15:15:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:31.877 15:15:49 -- common/autotest_common.sh@10 -- # set +x 00:03:31.877 ************************************ 00:03:31.877 START TEST env 00:03:31.877 ************************************ 00:03:31.877 15:15:49 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:32.137 * Looking for test storage... 
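Annotation on the 0xa80a check in opal_revert_cleanup above: get_nvme_bdfs_by_id appears to filter the enumerated controllers by PCI device ID so that only Opal-revert targets are touched. A minimal sketch of that filter, built from the calls visible in the trace (gen_nvme.sh + jq for enumeration, a sysfs read for the ID; 0x0a54 is the Intel DC P4500/P4510-class ID, while the attached 144d:a80a Samsung controller fails the match, which is why the function returns with nothing to revert):

bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))   # -> 0000:65:00.0 here
matched=()
for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")    # reads 0xa80a for this controller
    [[ $device == 0x0a54 ]] && matched+=("$bdf")        # 0x0a54: Intel DC P4500/P4510-class ID
done
echo "${#matched[@]} controller(s) eligible for Opal revert"   # 0 here, so cleanup is a no-op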
00:03:32.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:32.137 15:15:49 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:32.137 15:15:49 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:32.137 15:15:49 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:32.137 15:15:50 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:32.137 15:15:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:32.137 15:15:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:32.137 15:15:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:32.137 15:15:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:32.137 15:15:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:32.137 15:15:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:32.137 15:15:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:32.137 15:15:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:32.137 15:15:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:32.137 15:15:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:32.137 15:15:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:32.137 15:15:50 env -- scripts/common.sh@344 -- # case "$op" in 00:03:32.137 15:15:50 env -- scripts/common.sh@345 -- # : 1 00:03:32.137 15:15:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:32.137 15:15:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:32.137 15:15:50 env -- scripts/common.sh@365 -- # decimal 1 00:03:32.137 15:15:50 env -- scripts/common.sh@353 -- # local d=1 00:03:32.137 15:15:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:32.137 15:15:50 env -- scripts/common.sh@355 -- # echo 1 00:03:32.137 15:15:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:32.137 15:15:50 env -- scripts/common.sh@366 -- # decimal 2 00:03:32.137 15:15:50 env -- scripts/common.sh@353 -- # local d=2 00:03:32.137 15:15:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:32.137 15:15:50 env -- scripts/common.sh@355 -- # echo 2 00:03:32.137 15:15:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:32.137 15:15:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:32.137 15:15:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:32.137 15:15:50 env -- scripts/common.sh@368 -- # return 0 00:03:32.137 15:15:50 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:32.137 15:15:50 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:32.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.137 --rc genhtml_branch_coverage=1 00:03:32.137 --rc genhtml_function_coverage=1 00:03:32.137 --rc genhtml_legend=1 00:03:32.137 --rc geninfo_all_blocks=1 00:03:32.137 --rc geninfo_unexecuted_blocks=1 00:03:32.137 00:03:32.137 ' 00:03:32.137 15:15:50 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:32.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.137 --rc genhtml_branch_coverage=1 00:03:32.137 --rc genhtml_function_coverage=1 00:03:32.137 --rc genhtml_legend=1 00:03:32.137 --rc geninfo_all_blocks=1 00:03:32.137 --rc geninfo_unexecuted_blocks=1 00:03:32.137 00:03:32.137 ' 00:03:32.137 15:15:50 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:32.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.137 --rc genhtml_branch_coverage=1 00:03:32.137 --rc genhtml_function_coverage=1 
00:03:32.137 --rc genhtml_legend=1 00:03:32.137 --rc geninfo_all_blocks=1 00:03:32.138 --rc geninfo_unexecuted_blocks=1 00:03:32.138 00:03:32.138 ' 00:03:32.138 15:15:50 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:32.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.138 --rc genhtml_branch_coverage=1 00:03:32.138 --rc genhtml_function_coverage=1 00:03:32.138 --rc genhtml_legend=1 00:03:32.138 --rc geninfo_all_blocks=1 00:03:32.138 --rc geninfo_unexecuted_blocks=1 00:03:32.138 00:03:32.138 ' 00:03:32.138 15:15:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:32.138 15:15:50 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:32.138 15:15:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:32.138 15:15:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:32.138 ************************************ 00:03:32.138 START TEST env_memory 00:03:32.138 ************************************ 00:03:32.138 15:15:50 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:32.138 00:03:32.138 00:03:32.138 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.138 http://cunit.sourceforge.net/ 00:03:32.138 00:03:32.138 00:03:32.138 Suite: memory 00:03:32.398 Test: alloc and free memory map ...[2024-11-06 15:15:50.138766] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:32.398 passed 00:03:32.398 Test: mem map translation ...[2024-11-06 15:15:50.164406] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:32.398 [2024-11-06 15:15:50.164432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:32.398 [2024-11-06 15:15:50.164480] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:32.398 [2024-11-06 15:15:50.164488] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:32.398 passed 00:03:32.398 Test: mem map registration ...[2024-11-06 15:15:50.219743] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:32.398 [2024-11-06 15:15:50.219786] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:32.398 passed 00:03:32.398 Test: mem map adjacent registrations ...passed 00:03:32.398 00:03:32.398 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.398 suites 1 1 n/a 0 0 00:03:32.398 tests 4 4 4 0 0 00:03:32.398 asserts 152 152 152 0 n/a 00:03:32.398 00:03:32.398 Elapsed time = 0.191 seconds 00:03:32.398 00:03:32.398 real 0m0.206s 00:03:32.398 user 0m0.193s 00:03:32.398 sys 0m0.012s 00:03:32.398 15:15:50 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:32.398 15:15:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
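Annotation: the starred START/END banners wrapping memory_ut here (and every test that follows) come from the run_test helper in autotest_common.sh. Roughly, under the names visible in this trace (a sketch of the banner/return-code plumbing, not the exact implementation):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"                    # e.g. .../test/env/memory/memory_ut
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}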
00:03:32.398 ************************************ 00:03:32.398 END TEST env_memory 00:03:32.398 ************************************ 00:03:32.398 15:15:50 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:32.398 15:15:50 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:32.398 15:15:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:32.398 15:15:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:32.398 ************************************ 00:03:32.399 START TEST env_vtophys 00:03:32.399 ************************************ 00:03:32.399 15:15:50 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:32.660 EAL: lib.eal log level changed from notice to debug 00:03:32.660 EAL: Detected lcore 0 as core 0 on socket 0 00:03:32.660 EAL: Detected lcore 1 as core 1 on socket 0 00:03:32.660 EAL: Detected lcore 2 as core 2 on socket 0 00:03:32.660 EAL: Detected lcore 3 as core 3 on socket 0 00:03:32.660 EAL: Detected lcore 4 as core 4 on socket 0 00:03:32.660 EAL: Detected lcore 5 as core 5 on socket 0 00:03:32.660 EAL: Detected lcore 6 as core 6 on socket 0 00:03:32.660 EAL: Detected lcore 7 as core 7 on socket 0 00:03:32.661 EAL: Detected lcore 8 as core 8 on socket 0 00:03:32.661 EAL: Detected lcore 9 as core 9 on socket 0 00:03:32.661 EAL: Detected lcore 10 as core 10 on socket 0 00:03:32.661 EAL: Detected lcore 11 as core 11 on socket 0 00:03:32.661 EAL: Detected lcore 12 as core 12 on socket 0 00:03:32.661 EAL: Detected lcore 13 as core 13 on socket 0 00:03:32.661 EAL: Detected lcore 14 as core 14 on socket 0 00:03:32.661 EAL: Detected lcore 15 as core 15 on socket 0 00:03:32.661 EAL: Detected lcore 16 as core 16 on socket 0 00:03:32.661 EAL: Detected lcore 17 as core 17 on socket 0 00:03:32.661 EAL: Detected lcore 18 as core 18 on socket 0 00:03:32.661 EAL: Detected lcore 19 as core 19 on socket 0 00:03:32.661 EAL: Detected lcore 20 as core 20 on socket 0 00:03:32.661 EAL: Detected lcore 21 as core 21 on socket 0 00:03:32.661 EAL: Detected lcore 22 as core 22 on socket 0 00:03:32.661 EAL: Detected lcore 23 as core 23 on socket 0 00:03:32.661 EAL: Detected lcore 24 as core 24 on socket 0 00:03:32.661 EAL: Detected lcore 25 as core 25 on socket 0 00:03:32.661 EAL: Detected lcore 26 as core 26 on socket 0 00:03:32.661 EAL: Detected lcore 27 as core 27 on socket 0 00:03:32.661 EAL: Detected lcore 28 as core 28 on socket 0 00:03:32.661 EAL: Detected lcore 29 as core 29 on socket 0 00:03:32.661 EAL: Detected lcore 30 as core 30 on socket 0 00:03:32.661 EAL: Detected lcore 31 as core 31 on socket 0 00:03:32.661 EAL: Detected lcore 32 as core 32 on socket 0 00:03:32.661 EAL: Detected lcore 33 as core 33 on socket 0 00:03:32.661 EAL: Detected lcore 34 as core 34 on socket 0 00:03:32.661 EAL: Detected lcore 35 as core 35 on socket 0 00:03:32.661 EAL: Detected lcore 36 as core 0 on socket 1 00:03:32.661 EAL: Detected lcore 37 as core 1 on socket 1 00:03:32.661 EAL: Detected lcore 38 as core 2 on socket 1 00:03:32.661 EAL: Detected lcore 39 as core 3 on socket 1 00:03:32.661 EAL: Detected lcore 40 as core 4 on socket 1 00:03:32.661 EAL: Detected lcore 41 as core 5 on socket 1 00:03:32.662 EAL: Detected lcore 42 as core 6 on socket 1 00:03:32.662 EAL: Detected lcore 43 as core 7 on socket 1 00:03:32.662 EAL: Detected lcore 44 as core 8 on socket 1 00:03:32.662 EAL: Detected lcore 45 as core 9 on socket 1 
00:03:32.662 EAL: Detected lcore 46 as core 10 on socket 1 00:03:32.662 EAL: Detected lcore 47 as core 11 on socket 1 00:03:32.662 EAL: Detected lcore 48 as core 12 on socket 1 00:03:32.662 EAL: Detected lcore 49 as core 13 on socket 1 00:03:32.663 EAL: Detected lcore 50 as core 14 on socket 1 00:03:32.663 EAL: Detected lcore 51 as core 15 on socket 1 00:03:32.663 EAL: Detected lcore 52 as core 16 on socket 1 00:03:32.663 EAL: Detected lcore 53 as core 17 on socket 1 00:03:32.663 EAL: Detected lcore 54 as core 18 on socket 1 00:03:32.663 EAL: Detected lcore 55 as core 19 on socket 1 00:03:32.663 EAL: Detected lcore 56 as core 20 on socket 1 00:03:32.663 EAL: Detected lcore 57 as core 21 on socket 1 00:03:32.663 EAL: Detected lcore 58 as core 22 on socket 1 00:03:32.663 EAL: Detected lcore 59 as core 23 on socket 1 00:03:32.663 EAL: Detected lcore 60 as core 24 on socket 1 00:03:32.664 EAL: Detected lcore 61 as core 25 on socket 1 00:03:32.664 EAL: Detected lcore 62 as core 26 on socket 1 00:03:32.664 EAL: Detected lcore 63 as core 27 on socket 1 00:03:32.664 EAL: Detected lcore 64 as core 28 on socket 1 00:03:32.664 EAL: Detected lcore 65 as core 29 on socket 1 00:03:32.664 EAL: Detected lcore 66 as core 30 on socket 1 00:03:32.664 EAL: Detected lcore 67 as core 31 on socket 1 00:03:32.664 EAL: Detected lcore 68 as core 32 on socket 1 00:03:32.664 EAL: Detected lcore 69 as core 33 on socket 1 00:03:32.664 EAL: Detected lcore 70 as core 34 on socket 1 00:03:32.665 EAL: Detected lcore 71 as core 35 on socket 1 00:03:32.665 EAL: Detected lcore 72 as core 0 on socket 0 00:03:32.665 EAL: Detected lcore 73 as core 1 on socket 0 00:03:32.665 EAL: Detected lcore 74 as core 2 on socket 0 00:03:32.665 EAL: Detected lcore 75 as core 3 on socket 0 00:03:32.665 EAL: Detected lcore 76 as core 4 on socket 0 00:03:32.665 EAL: Detected lcore 77 as core 5 on socket 0 00:03:32.665 EAL: Detected lcore 78 as core 6 on socket 0 00:03:32.665 EAL: Detected lcore 79 as core 7 on socket 0 00:03:32.666 EAL: Detected lcore 80 as core 8 on socket 0 00:03:32.666 EAL: Detected lcore 81 as core 9 on socket 0 00:03:32.666 EAL: Detected lcore 82 as core 10 on socket 0 00:03:32.666 EAL: Detected lcore 83 as core 11 on socket 0 00:03:32.666 EAL: Detected lcore 84 as core 12 on socket 0 00:03:32.666 EAL: Detected lcore 85 as core 13 on socket 0 00:03:32.666 EAL: Detected lcore 86 as core 14 on socket 0 00:03:32.666 EAL: Detected lcore 87 as core 15 on socket 0 00:03:32.666 EAL: Detected lcore 88 as core 16 on socket 0 00:03:32.666 EAL: Detected lcore 89 as core 17 on socket 0 00:03:32.666 EAL: Detected lcore 90 as core 18 on socket 0 00:03:32.666 EAL: Detected lcore 91 as core 19 on socket 0 00:03:32.666 EAL: Detected lcore 92 as core 20 on socket 0 00:03:32.667 EAL: Detected lcore 93 as core 21 on socket 0 00:03:32.667 EAL: Detected lcore 94 as core 22 on socket 0 00:03:32.667 EAL: Detected lcore 95 as core 23 on socket 0 00:03:32.667 EAL: Detected lcore 96 as core 24 on socket 0 00:03:32.667 EAL: Detected lcore 97 as core 25 on socket 0 00:03:32.667 EAL: Detected lcore 98 as core 26 on socket 0 00:03:32.667 EAL: Detected lcore 99 as core 27 on socket 0 00:03:32.667 EAL: Detected lcore 100 as core 28 on socket 0 00:03:32.667 EAL: Detected lcore 101 as core 29 on socket 0 00:03:32.667 EAL: Detected lcore 102 as core 30 on socket 0 00:03:32.667 EAL: Detected lcore 103 as core 31 on socket 0 00:03:32.668 EAL: Detected lcore 104 as core 32 on socket 0 00:03:32.668 EAL: Detected lcore 105 as core 33 on socket 0 00:03:32.668 EAL: 
Detected lcore 106 as core 34 on socket 0 00:03:32.668 EAL: Detected lcore 107 as core 35 on socket 0 00:03:32.668 EAL: Detected lcore 108 as core 0 on socket 1 00:03:32.668 EAL: Detected lcore 109 as core 1 on socket 1 00:03:32.668 EAL: Detected lcore 110 as core 2 on socket 1 00:03:32.668 EAL: Detected lcore 111 as core 3 on socket 1 00:03:32.668 EAL: Detected lcore 112 as core 4 on socket 1 00:03:32.668 EAL: Detected lcore 113 as core 5 on socket 1 00:03:32.668 EAL: Detected lcore 114 as core 6 on socket 1 00:03:32.668 EAL: Detected lcore 115 as core 7 on socket 1 00:03:32.668 EAL: Detected lcore 116 as core 8 on socket 1 00:03:32.668 EAL: Detected lcore 117 as core 9 on socket 1 00:03:32.668 EAL: Detected lcore 118 as core 10 on socket 1 00:03:32.669 EAL: Detected lcore 119 as core 11 on socket 1 00:03:32.669 EAL: Detected lcore 120 as core 12 on socket 1 00:03:32.669 EAL: Detected lcore 121 as core 13 on socket 1 00:03:32.669 EAL: Detected lcore 122 as core 14 on socket 1 00:03:32.669 EAL: Detected lcore 123 as core 15 on socket 1 00:03:32.669 EAL: Detected lcore 124 as core 16 on socket 1 00:03:32.669 EAL: Detected lcore 125 as core 17 on socket 1 00:03:32.669 EAL: Detected lcore 126 as core 18 on socket 1 00:03:32.669 EAL: Detected lcore 127 as core 19 on socket 1 00:03:32.669 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:32.669 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:32.669 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:32.669 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:32.669 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:32.670 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:32.670 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:32.670 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:32.670 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:32.670 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:32.670 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:32.670 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:32.670 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:32.670 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:32.670 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:32.670 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:32.670 EAL: Maximum logical cores by configuration: 128 00:03:32.670 EAL: Detected CPU lcores: 128 00:03:32.670 EAL: Detected NUMA nodes: 2 00:03:32.670 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:32.670 EAL: Detected shared linkage of DPDK 00:03:32.671 EAL: No shared files mode enabled, IPC will be disabled 00:03:32.671 EAL: Bus pci wants IOVA as 'DC' 00:03:32.671 EAL: Buses did not request a specific IOVA mode. 00:03:32.671 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:32.671 EAL: Selected IOVA mode 'VA' 00:03:32.671 EAL: Probing VFIO support... 00:03:32.671 EAL: IOMMU type 1 (Type 1) is supported 00:03:32.671 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:32.671 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:32.671 EAL: VFIO support initialized 00:03:32.671 EAL: Ask a virtual area of 0x2e000 bytes 00:03:32.672 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:32.672 EAL: Setting up physically contiguous memory... 
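Annotation: the lcore walk above is internally consistent. The box exposes 2 sockets x 36 physical cores x 2 hyperthreads = 144 logical CPUs, but this DPDK build caps lcores at 128 (presumably RTE_MAX_LCORE), which is why IDs 128-143 on socket 1 are "Skipped" rather than "Detected":

echo $(( 2 * 36 * 2 ))    # 144 logical CPUs present
echo $(( 144 - 128 ))     # 16 lcores skipped (IDs 128..143)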
00:03:32.672 EAL: Setting maximum number of open files to 524288 00:03:32.672 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:32.672 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:32.672 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:32.672 EAL: Ask a virtual area of 0x61000 bytes 00:03:32.672 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:32.673 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:32.673 EAL: Ask a virtual area of 0x400000000 bytes 00:03:32.673 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:32.673 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:32.673 EAL: Ask a virtual area of 0x61000 bytes 00:03:32.673 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:32.673 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:32.673 EAL: Ask a virtual area of 0x400000000 bytes 00:03:32.673 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:32.673 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:32.673 EAL: Ask a virtual area of 0x61000 bytes 00:03:32.673 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:32.673 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:32.673 EAL: Ask a virtual area of 0x400000000 bytes 00:03:32.673 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:32.673 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:32.673 EAL: Ask a virtual area of 0x61000 bytes 00:03:32.673 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:32.673 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:32.673 EAL: Ask a virtual area of 0x400000000 bytes 00:03:32.673 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:32.673 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:32.673 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:32.673 EAL: Ask a virtual area of 0x61000 bytes 00:03:32.673 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:32.673 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:32.673 EAL: Ask a virtual area of 0x400000000 bytes 00:03:32.673 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:32.673 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:32.673 EAL: Ask a virtual area of 0x61000 bytes 00:03:32.673 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:32.673 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:32.673 EAL: Ask a virtual area of 0x400000000 bytes 00:03:32.673 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:32.673 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:32.673 EAL: Ask a virtual area of 0x61000 bytes 00:03:32.673 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:32.673 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:32.673 EAL: Ask a virtual area of 0x400000000 bytes 00:03:32.673 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:32.673 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:32.673 EAL: Ask a virtual area of 0x61000 bytes 00:03:32.673 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:32.673 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:32.673 EAL: Ask a virtual area of 0x400000000 bytes 00:03:32.673 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:32.673 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:32.673 EAL: Hugepages will be freed exactly as allocated. 00:03:32.673 EAL: No shared files mode enabled, IPC is disabled 00:03:32.673 EAL: No shared files mode enabled, IPC is disabled 00:03:32.673 EAL: TSC frequency is ~2400000 KHz 00:03:32.673 EAL: Main lcore 0 is ready (tid=7ff3f0979a00;cpuset=[0]) 00:03:32.673 EAL: Trying to obtain current memory policy. 00:03:32.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.673 EAL: Restoring previous memory policy: 0 00:03:32.673 EAL: request: mp_malloc_sync 00:03:32.673 EAL: No shared files mode enabled, IPC is disabled 00:03:32.673 EAL: Heap on socket 0 was expanded by 2MB 00:03:32.673 EAL: No shared files mode enabled, IPC is disabled 00:03:32.673 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:32.673 EAL: Mem event callback 'spdk:(nil)' registered 00:03:32.673 00:03:32.673 00:03:32.673 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.673 http://cunit.sourceforge.net/ 00:03:32.673 00:03:32.673 00:03:32.673 Suite: components_suite 00:03:32.673 Test: vtophys_malloc_test ...passed 00:03:32.673 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:32.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.673 EAL: Restoring previous memory policy: 4 00:03:32.673 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.673 EAL: request: mp_malloc_sync 00:03:32.673 EAL: No shared files mode enabled, IPC is disabled 00:03:32.673 EAL: Heap on socket 0 was expanded by 4MB 00:03:32.673 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.673 EAL: request: mp_malloc_sync 00:03:32.673 EAL: No shared files mode enabled, IPC is disabled 00:03:32.673 EAL: Heap on socket 0 was shrunk by 4MB 00:03:32.673 EAL: Trying to obtain current memory policy. 00:03:32.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.673 EAL: Restoring previous memory policy: 4 00:03:32.673 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.673 EAL: request: mp_malloc_sync 00:03:32.673 EAL: No shared files mode enabled, IPC is disabled 00:03:32.673 EAL: Heap on socket 0 was expanded by 6MB 00:03:32.673 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.674 EAL: request: mp_malloc_sync 00:03:32.674 EAL: No shared files mode enabled, IPC is disabled 00:03:32.674 EAL: Heap on socket 0 was shrunk by 6MB 00:03:32.674 EAL: Trying to obtain current memory policy. 00:03:32.674 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.674 EAL: Restoring previous memory policy: 4 00:03:32.674 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.674 EAL: request: mp_malloc_sync 00:03:32.674 EAL: No shared files mode enabled, IPC is disabled 00:03:32.674 EAL: Heap on socket 0 was expanded by 10MB 00:03:32.674 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.674 EAL: request: mp_malloc_sync 00:03:32.674 EAL: No shared files mode enabled, IPC is disabled 00:03:32.674 EAL: Heap on socket 0 was shrunk by 10MB 00:03:32.674 EAL: Trying to obtain current memory policy. 
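Annotation: the memseg reservations above also check out. Each list holds n_segs:8192 pages of hugepage_sz:2097152 bytes, which is exactly the 0x400000000 (16 GiB) of virtual address space reserved per list, and 4 lists per socket across 2 sockets reserve 128 GiB of VA in total:

printf '0x%x\n' $(( 8192 * 2097152 ))   # 0x400000000 = 16 GiB per memseg list
echo "$(( 4 * 2 * 16 )) GiB"            # 128 GiB of VA reserved across both sockets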
00:03:32.674 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.674 EAL: Restoring previous memory policy: 4 00:03:32.674 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.674 EAL: request: mp_malloc_sync 00:03:32.674 EAL: No shared files mode enabled, IPC is disabled 00:03:32.674 EAL: Heap on socket 0 was expanded by 18MB 00:03:32.674 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.674 EAL: request: mp_malloc_sync 00:03:32.674 EAL: No shared files mode enabled, IPC is disabled 00:03:32.674 EAL: Heap on socket 0 was shrunk by 18MB 00:03:32.674 EAL: Trying to obtain current memory policy. 00:03:32.674 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.674 EAL: Restoring previous memory policy: 4 00:03:32.674 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.674 EAL: request: mp_malloc_sync 00:03:32.674 EAL: No shared files mode enabled, IPC is disabled 00:03:32.674 EAL: Heap on socket 0 was expanded by 34MB 00:03:32.674 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.674 EAL: request: mp_malloc_sync 00:03:32.674 EAL: No shared files mode enabled, IPC is disabled 00:03:32.674 EAL: Heap on socket 0 was shrunk by 34MB 00:03:32.674 EAL: Trying to obtain current memory policy. 00:03:32.674 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.674 EAL: Restoring previous memory policy: 4 00:03:32.674 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.674 EAL: request: mp_malloc_sync 00:03:32.674 EAL: No shared files mode enabled, IPC is disabled 00:03:32.674 EAL: Heap on socket 0 was expanded by 66MB 00:03:32.674 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.674 EAL: request: mp_malloc_sync 00:03:32.674 EAL: No shared files mode enabled, IPC is disabled 00:03:32.674 EAL: Heap on socket 0 was shrunk by 66MB 00:03:32.674 EAL: Trying to obtain current memory policy. 00:03:32.674 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.674 EAL: Restoring previous memory policy: 4 00:03:32.674 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.674 EAL: request: mp_malloc_sync 00:03:32.674 EAL: No shared files mode enabled, IPC is disabled 00:03:32.674 EAL: Heap on socket 0 was expanded by 130MB 00:03:32.674 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.674 EAL: request: mp_malloc_sync 00:03:32.674 EAL: No shared files mode enabled, IPC is disabled 00:03:32.674 EAL: Heap on socket 0 was shrunk by 130MB 00:03:32.674 EAL: Trying to obtain current memory policy. 00:03:32.674 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.674 EAL: Restoring previous memory policy: 4 00:03:32.674 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.674 EAL: request: mp_malloc_sync 00:03:32.674 EAL: No shared files mode enabled, IPC is disabled 00:03:32.674 EAL: Heap on socket 0 was expanded by 258MB 00:03:32.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.933 EAL: request: mp_malloc_sync 00:03:32.933 EAL: No shared files mode enabled, IPC is disabled 00:03:32.933 EAL: Heap on socket 0 was shrunk by 258MB 00:03:32.933 EAL: Trying to obtain current memory policy. 
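Annotation: the sweep sizes in vtophys_spdk_malloc_test (4, 6, 10, 18, 34, 66, 130, 258MB above, continuing to 514 and 1026MB below) follow 2^k + 2 MB, which looks deliberate: every allocation straddles a power-of-two boundary rather than landing exactly on one:

for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
# 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB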
00:03:32.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:32.933 EAL: Restoring previous memory policy: 4 00:03:32.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.933 EAL: request: mp_malloc_sync 00:03:32.933 EAL: No shared files mode enabled, IPC is disabled 00:03:32.933 EAL: Heap on socket 0 was expanded by 514MB 00:03:32.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.933 EAL: request: mp_malloc_sync 00:03:32.933 EAL: No shared files mode enabled, IPC is disabled 00:03:32.933 EAL: Heap on socket 0 was shrunk by 514MB 00:03:32.933 EAL: Trying to obtain current memory policy. 00:03:32.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.194 EAL: Restoring previous memory policy: 4 00:03:33.195 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.195 EAL: request: mp_malloc_sync 00:03:33.195 EAL: No shared files mode enabled, IPC is disabled 00:03:33.195 EAL: Heap on socket 0 was expanded by 1026MB 00:03:33.195 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.455 EAL: request: mp_malloc_sync 00:03:33.455 EAL: No shared files mode enabled, IPC is disabled 00:03:33.455 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:33.455 passed 00:03:33.455 00:03:33.455 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.455 suites 1 1 n/a 0 0 00:03:33.455 tests 2 2 2 0 0 00:03:33.455 asserts 497 497 497 0 n/a 00:03:33.455 00:03:33.455 Elapsed time = 0.687 seconds 00:03:33.455 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.455 EAL: request: mp_malloc_sync 00:03:33.455 EAL: No shared files mode enabled, IPC is disabled 00:03:33.455 EAL: Heap on socket 0 was shrunk by 2MB 00:03:33.455 EAL: No shared files mode enabled, IPC is disabled 00:03:33.455 EAL: No shared files mode enabled, IPC is disabled 00:03:33.455 EAL: No shared files mode enabled, IPC is disabled 00:03:33.455 00:03:33.455 real 0m0.835s 00:03:33.455 user 0m0.436s 00:03:33.455 sys 0m0.373s 00:03:33.455 15:15:51 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:33.455 15:15:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:33.455 ************************************ 00:03:33.455 END TEST env_vtophys 00:03:33.455 ************************************ 00:03:33.455 15:15:51 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:33.455 15:15:51 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:33.455 15:15:51 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:33.455 15:15:51 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.455 ************************************ 00:03:33.455 START TEST env_pci 00:03:33.455 ************************************ 00:03:33.455 15:15:51 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:33.455 00:03:33.455 00:03:33.455 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.455 http://cunit.sourceforge.net/ 00:03:33.455 00:03:33.455 00:03:33.455 Suite: pci 00:03:33.455 Test: pci_hook ...[2024-11-06 15:15:51.307204] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3531044 has claimed it 00:03:33.455 EAL: Cannot find device (10000:00:01.0) 00:03:33.455 EAL: Failed to attach device on primary process 00:03:33.455 passed 00:03:33.455 00:03:33.455 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:33.455 suites 1 1 n/a 0 0 00:03:33.455 tests 1 1 1 0 0 00:03:33.455 asserts 25 25 25 0 n/a 00:03:33.455 00:03:33.455 Elapsed time = 0.031 seconds 00:03:33.455 00:03:33.455 real 0m0.052s 00:03:33.455 user 0m0.022s 00:03:33.455 sys 0m0.029s 00:03:33.455 15:15:51 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:33.455 15:15:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:33.455 ************************************ 00:03:33.455 END TEST env_pci 00:03:33.455 ************************************ 00:03:33.455 15:15:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:33.455 15:15:51 env -- env/env.sh@15 -- # uname 00:03:33.455 15:15:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:33.455 15:15:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:33.455 15:15:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:33.455 15:15:51 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:33.455 15:15:51 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:33.455 15:15:51 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.455 ************************************ 00:03:33.455 START TEST env_dpdk_post_init 00:03:33.455 ************************************ 00:03:33.455 15:15:51 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:33.716 EAL: Detected CPU lcores: 128 00:03:33.716 EAL: Detected NUMA nodes: 2 00:03:33.716 EAL: Detected shared linkage of DPDK 00:03:33.716 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:33.716 EAL: Selected IOVA mode 'VA' 00:03:33.716 EAL: VFIO support initialized 00:03:33.716 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:33.716 EAL: Using IOMMU type 1 (Type 1) 00:03:33.975 EAL: Ignore mapping IO port bar(1) 00:03:33.975 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:33.975 EAL: Ignore mapping IO port bar(1) 00:03:34.236 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:34.236 EAL: Ignore mapping IO port bar(1) 00:03:34.495 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:34.495 EAL: Ignore mapping IO port bar(1) 00:03:34.495 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:34.755 EAL: Ignore mapping IO port bar(1) 00:03:34.755 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:35.016 EAL: Ignore mapping IO port bar(1) 00:03:35.016 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:35.275 EAL: Ignore mapping IO port bar(1) 00:03:35.275 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:35.536 EAL: Ignore mapping IO port bar(1) 00:03:35.536 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:35.796 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:35.796 EAL: Ignore mapping IO port bar(1) 00:03:36.056 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:36.056 EAL: Ignore mapping IO port bar(1) 00:03:36.317 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:36.317 EAL: Ignore mapping IO port bar(1) 00:03:36.317 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:36.577 EAL: Ignore mapping IO port bar(1) 00:03:36.577 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:36.837 EAL: Ignore mapping IO port bar(1) 00:03:36.837 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:37.097 EAL: Ignore mapping IO port bar(1) 00:03:37.097 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:37.097 EAL: Ignore mapping IO port bar(1) 00:03:37.358 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:37.358 EAL: Ignore mapping IO port bar(1) 00:03:37.618 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:37.618 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:37.618 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:37.618 Starting DPDK initialization... 00:03:37.618 Starting SPDK post initialization... 00:03:37.618 SPDK NVMe probe 00:03:37.618 Attaching to 0000:65:00.0 00:03:37.618 Attached to 0000:65:00.0 00:03:37.618 Cleaning up... 00:03:39.530 00:03:39.530 real 0m5.750s 00:03:39.530 user 0m0.114s 00:03:39.530 sys 0m0.193s 00:03:39.530 15:15:57 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:39.530 15:15:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:39.530 ************************************ 00:03:39.530 END TEST env_dpdk_post_init 00:03:39.530 ************************************ 00:03:39.530 15:15:57 env -- env/env.sh@26 -- # uname 00:03:39.530 15:15:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:39.530 15:15:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:39.530 15:15:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:39.530 15:15:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:39.530 15:15:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.530 ************************************ 00:03:39.530 START TEST env_mem_callbacks 00:03:39.530 ************************************ 00:03:39.530 15:15:57 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:39.530 EAL: Detected CPU lcores: 128 00:03:39.530 EAL: Detected NUMA nodes: 2 00:03:39.530 EAL: Detected shared linkage of DPDK 00:03:39.530 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:39.530 EAL: Selected IOVA mode 'VA' 00:03:39.530 EAL: VFIO support initialized 00:03:39.530 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:39.530 00:03:39.530 00:03:39.530 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.530 http://cunit.sourceforge.net/ 00:03:39.530 00:03:39.530 00:03:39.530 Suite: memory 00:03:39.530 Test: test ... 
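The register/unregister records that follow are the mem_callbacks suite at work: it mallocs buffers of several sizes and checks that each returned buf sits inside a region the 'spdk:(nil)' mem event callback just registered (one PASSED per step). The binary can be rerun standalone from the same checkout; root privileges and configured hugepages are assumed, as in this job:

    # run the memory-callback unit test directly (path as traced above)
    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks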
00:03:39.530 register 0x200000200000 2097152 00:03:39.530 malloc 3145728 00:03:39.531 register 0x200000400000 4194304 00:03:39.531 buf 0x200000500000 len 3145728 PASSED 00:03:39.531 malloc 64 00:03:39.531 buf 0x2000004fff40 len 64 PASSED 00:03:39.531 malloc 4194304 00:03:39.531 register 0x200000800000 6291456 00:03:39.531 buf 0x200000a00000 len 4194304 PASSED 00:03:39.531 free 0x200000500000 3145728 00:03:39.531 free 0x2000004fff40 64 00:03:39.531 unregister 0x200000400000 4194304 PASSED 00:03:39.531 free 0x200000a00000 4194304 00:03:39.531 unregister 0x200000800000 6291456 PASSED 00:03:39.531 malloc 8388608 00:03:39.531 register 0x200000400000 10485760 00:03:39.531 buf 0x200000600000 len 8388608 PASSED 00:03:39.531 free 0x200000600000 8388608 00:03:39.531 unregister 0x200000400000 10485760 PASSED 00:03:39.531 passed 00:03:39.531 00:03:39.531 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.531 suites 1 1 n/a 0 0 00:03:39.531 tests 1 1 1 0 0 00:03:39.531 asserts 15 15 15 0 n/a 00:03:39.531 00:03:39.531 Elapsed time = 0.010 seconds 00:03:39.531 00:03:39.531 real 0m0.071s 00:03:39.531 user 0m0.025s 00:03:39.531 sys 0m0.046s 00:03:39.531 15:15:57 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:39.531 15:15:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:39.531 ************************************ 00:03:39.531 END TEST env_mem_callbacks 00:03:39.531 ************************************ 00:03:39.531 00:03:39.531 real 0m7.539s 00:03:39.531 user 0m1.051s 00:03:39.531 sys 0m1.051s 00:03:39.531 15:15:57 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:39.531 15:15:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.531 ************************************ 00:03:39.531 END TEST env 00:03:39.531 ************************************ 00:03:39.531 15:15:57 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:39.531 15:15:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:39.531 15:15:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:39.531 15:15:57 -- common/autotest_common.sh@10 -- # set +x 00:03:39.531 ************************************ 00:03:39.531 START TEST rpc 00:03:39.531 ************************************ 00:03:39.531 15:15:57 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:39.792 * Looking for test storage... 
00:03:39.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:39.792 15:15:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.792 15:15:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.792 15:15:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.792 15:15:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.792 15:15:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.792 15:15:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.792 15:15:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.792 15:15:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.792 15:15:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.792 15:15:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.792 15:15:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.792 15:15:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:39.792 15:15:57 rpc -- scripts/common.sh@345 -- # : 1 00:03:39.792 15:15:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.792 15:15:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:39.792 15:15:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:39.792 15:15:57 rpc -- scripts/common.sh@353 -- # local d=1 00:03:39.792 15:15:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.792 15:15:57 rpc -- scripts/common.sh@355 -- # echo 1 00:03:39.792 15:15:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.792 15:15:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:39.792 15:15:57 rpc -- scripts/common.sh@353 -- # local d=2 00:03:39.792 15:15:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.792 15:15:57 rpc -- scripts/common.sh@355 -- # echo 2 00:03:39.792 15:15:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.792 15:15:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.792 15:15:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.792 15:15:57 rpc -- scripts/common.sh@368 -- # return 0 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:39.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.792 --rc genhtml_branch_coverage=1 00:03:39.792 --rc genhtml_function_coverage=1 00:03:39.792 --rc genhtml_legend=1 00:03:39.792 --rc geninfo_all_blocks=1 00:03:39.792 --rc geninfo_unexecuted_blocks=1 00:03:39.792 00:03:39.792 ' 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:39.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.792 --rc genhtml_branch_coverage=1 00:03:39.792 --rc genhtml_function_coverage=1 00:03:39.792 --rc genhtml_legend=1 00:03:39.792 --rc geninfo_all_blocks=1 00:03:39.792 --rc geninfo_unexecuted_blocks=1 00:03:39.792 00:03:39.792 ' 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:39.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.792 --rc genhtml_branch_coverage=1 00:03:39.792 --rc genhtml_function_coverage=1 
00:03:39.792 --rc genhtml_legend=1 00:03:39.792 --rc geninfo_all_blocks=1 00:03:39.792 --rc geninfo_unexecuted_blocks=1 00:03:39.792 00:03:39.792 ' 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:39.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.792 --rc genhtml_branch_coverage=1 00:03:39.792 --rc genhtml_function_coverage=1 00:03:39.792 --rc genhtml_legend=1 00:03:39.792 --rc geninfo_all_blocks=1 00:03:39.792 --rc geninfo_unexecuted_blocks=1 00:03:39.792 00:03:39.792 ' 00:03:39.792 15:15:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3532380 00:03:39.792 15:15:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:39.792 15:15:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:39.792 15:15:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3532380 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@833 -- # '[' -z 3532380 ']' 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:39.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:39.792 15:15:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.792 [2024-11-06 15:15:57.733743] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:03:39.792 [2024-11-06 15:15:57.733814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532380 ] 00:03:40.053 [2024-11-06 15:15:57.827153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.053 [2024-11-06 15:15:57.878721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:40.053 [2024-11-06 15:15:57.878782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3532380' to capture a snapshot of events at runtime. 00:03:40.053 [2024-11-06 15:15:57.878791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:40.053 [2024-11-06 15:15:57.878798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:40.053 [2024-11-06 15:15:57.878804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3532380 for offline analysis/debug. 
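The app_setup_trace notices above mean spdk_tgt was launched with '-e bdev', so only the bdev tpoint group is enabled; that is the 0xffffffffffffffff bdev mask that trace_get_info reports further down. A sketch of reproducing the advertised snapshot by hand (paths follow this workspace layout; spdk_trace being on PATH is an assumption, per the NOTICE text):

    # start the target with the bdev tracepoint group enabled
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    # ...exercise the target...
    spdk_trace -s spdk_tgt -p "$tgt_pid"    # capture a snapshot of events at runtime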
00:03:40.053 [2024-11-06 15:15:57.879599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.626 15:15:58 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:40.626 15:15:58 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:40.626 15:15:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:40.626 15:15:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:40.626 15:15:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:40.626 15:15:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:40.626 15:15:58 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:40.626 15:15:58 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:40.626 15:15:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.626 ************************************ 00:03:40.626 START TEST rpc_integrity 00:03:40.626 ************************************ 00:03:40.626 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:40.626 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:40.626 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.626 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.626 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.626 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:40.626 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:40.887 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:40.887 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:40.887 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:40.888 { 00:03:40.888 "name": "Malloc0", 00:03:40.888 "aliases": [ 00:03:40.888 "6144d961-12b5-49ef-b7c8-53aa3e27e20a" 00:03:40.888 ], 00:03:40.888 "product_name": "Malloc disk", 00:03:40.888 "block_size": 512, 00:03:40.888 "num_blocks": 16384, 00:03:40.888 "uuid": "6144d961-12b5-49ef-b7c8-53aa3e27e20a", 00:03:40.888 "assigned_rate_limits": { 00:03:40.888 "rw_ios_per_sec": 0, 00:03:40.888 "rw_mbytes_per_sec": 0, 00:03:40.888 "r_mbytes_per_sec": 0, 00:03:40.888 "w_mbytes_per_sec": 0 00:03:40.888 }, 
00:03:40.888 "claimed": false, 00:03:40.888 "zoned": false, 00:03:40.888 "supported_io_types": { 00:03:40.888 "read": true, 00:03:40.888 "write": true, 00:03:40.888 "unmap": true, 00:03:40.888 "flush": true, 00:03:40.888 "reset": true, 00:03:40.888 "nvme_admin": false, 00:03:40.888 "nvme_io": false, 00:03:40.888 "nvme_io_md": false, 00:03:40.888 "write_zeroes": true, 00:03:40.888 "zcopy": true, 00:03:40.888 "get_zone_info": false, 00:03:40.888 "zone_management": false, 00:03:40.888 "zone_append": false, 00:03:40.888 "compare": false, 00:03:40.888 "compare_and_write": false, 00:03:40.888 "abort": true, 00:03:40.888 "seek_hole": false, 00:03:40.888 "seek_data": false, 00:03:40.888 "copy": true, 00:03:40.888 "nvme_iov_md": false 00:03:40.888 }, 00:03:40.888 "memory_domains": [ 00:03:40.888 { 00:03:40.888 "dma_device_id": "system", 00:03:40.888 "dma_device_type": 1 00:03:40.888 }, 00:03:40.888 { 00:03:40.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.888 "dma_device_type": 2 00:03:40.888 } 00:03:40.888 ], 00:03:40.888 "driver_specific": {} 00:03:40.888 } 00:03:40.888 ]' 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.888 [2024-11-06 15:15:58.713900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:40.888 [2024-11-06 15:15:58.713945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:40.888 [2024-11-06 15:15:58.713962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf76800 00:03:40.888 [2024-11-06 15:15:58.713970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:40.888 [2024-11-06 15:15:58.715535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:40.888 [2024-11-06 15:15:58.715572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:40.888 Passthru0 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:40.888 { 00:03:40.888 "name": "Malloc0", 00:03:40.888 "aliases": [ 00:03:40.888 "6144d961-12b5-49ef-b7c8-53aa3e27e20a" 00:03:40.888 ], 00:03:40.888 "product_name": "Malloc disk", 00:03:40.888 "block_size": 512, 00:03:40.888 "num_blocks": 16384, 00:03:40.888 "uuid": "6144d961-12b5-49ef-b7c8-53aa3e27e20a", 00:03:40.888 "assigned_rate_limits": { 00:03:40.888 "rw_ios_per_sec": 0, 00:03:40.888 "rw_mbytes_per_sec": 0, 00:03:40.888 "r_mbytes_per_sec": 0, 00:03:40.888 "w_mbytes_per_sec": 0 00:03:40.888 }, 00:03:40.888 "claimed": true, 00:03:40.888 "claim_type": "exclusive_write", 00:03:40.888 "zoned": false, 00:03:40.888 "supported_io_types": { 00:03:40.888 "read": true, 00:03:40.888 "write": true, 00:03:40.888 "unmap": true, 00:03:40.888 "flush": 
true, 00:03:40.888 "reset": true, 00:03:40.888 "nvme_admin": false, 00:03:40.888 "nvme_io": false, 00:03:40.888 "nvme_io_md": false, 00:03:40.888 "write_zeroes": true, 00:03:40.888 "zcopy": true, 00:03:40.888 "get_zone_info": false, 00:03:40.888 "zone_management": false, 00:03:40.888 "zone_append": false, 00:03:40.888 "compare": false, 00:03:40.888 "compare_and_write": false, 00:03:40.888 "abort": true, 00:03:40.888 "seek_hole": false, 00:03:40.888 "seek_data": false, 00:03:40.888 "copy": true, 00:03:40.888 "nvme_iov_md": false 00:03:40.888 }, 00:03:40.888 "memory_domains": [ 00:03:40.888 { 00:03:40.888 "dma_device_id": "system", 00:03:40.888 "dma_device_type": 1 00:03:40.888 }, 00:03:40.888 { 00:03:40.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.888 "dma_device_type": 2 00:03:40.888 } 00:03:40.888 ], 00:03:40.888 "driver_specific": {} 00:03:40.888 }, 00:03:40.888 { 00:03:40.888 "name": "Passthru0", 00:03:40.888 "aliases": [ 00:03:40.888 "09217cc8-61f0-5b3d-8d27-947c94390775" 00:03:40.888 ], 00:03:40.888 "product_name": "passthru", 00:03:40.888 "block_size": 512, 00:03:40.888 "num_blocks": 16384, 00:03:40.888 "uuid": "09217cc8-61f0-5b3d-8d27-947c94390775", 00:03:40.888 "assigned_rate_limits": { 00:03:40.888 "rw_ios_per_sec": 0, 00:03:40.888 "rw_mbytes_per_sec": 0, 00:03:40.888 "r_mbytes_per_sec": 0, 00:03:40.888 "w_mbytes_per_sec": 0 00:03:40.888 }, 00:03:40.888 "claimed": false, 00:03:40.888 "zoned": false, 00:03:40.888 "supported_io_types": { 00:03:40.888 "read": true, 00:03:40.888 "write": true, 00:03:40.888 "unmap": true, 00:03:40.888 "flush": true, 00:03:40.888 "reset": true, 00:03:40.888 "nvme_admin": false, 00:03:40.888 "nvme_io": false, 00:03:40.888 "nvme_io_md": false, 00:03:40.888 "write_zeroes": true, 00:03:40.888 "zcopy": true, 00:03:40.888 "get_zone_info": false, 00:03:40.888 "zone_management": false, 00:03:40.888 "zone_append": false, 00:03:40.888 "compare": false, 00:03:40.888 "compare_and_write": false, 00:03:40.888 "abort": true, 00:03:40.888 "seek_hole": false, 00:03:40.888 "seek_data": false, 00:03:40.888 "copy": true, 00:03:40.888 "nvme_iov_md": false 00:03:40.888 }, 00:03:40.888 "memory_domains": [ 00:03:40.888 { 00:03:40.888 "dma_device_id": "system", 00:03:40.888 "dma_device_type": 1 00:03:40.888 }, 00:03:40.888 { 00:03:40.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.888 "dma_device_type": 2 00:03:40.888 } 00:03:40.888 ], 00:03:40.888 "driver_specific": { 00:03:40.888 "passthru": { 00:03:40.888 "name": "Passthru0", 00:03:40.888 "base_bdev_name": "Malloc0" 00:03:40.888 } 00:03:40.888 } 00:03:40.888 } 00:03:40.888 ]' 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.888 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:40.888 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:41.150 15:15:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:41.150 00:03:41.150 real 0m0.304s 00:03:41.150 user 0m0.183s 00:03:41.150 sys 0m0.053s 00:03:41.150 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:41.150 15:15:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.150 ************************************ 00:03:41.150 END TEST rpc_integrity 00:03:41.150 ************************************ 00:03:41.150 15:15:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:41.150 15:15:58 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:41.150 15:15:58 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:41.150 15:15:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.150 ************************************ 00:03:41.150 START TEST rpc_plugins 00:03:41.150 ************************************ 00:03:41.150 15:15:58 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:41.150 15:15:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:41.150 15:15:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.150 15:15:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.150 15:15:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.150 15:15:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:41.150 15:15:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:41.150 15:15:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.150 15:15:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.150 15:15:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.150 15:15:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:41.150 { 00:03:41.150 "name": "Malloc1", 00:03:41.150 "aliases": [ 00:03:41.150 "242a4d70-c62f-46a8-9d26-e6d49bab8572" 00:03:41.150 ], 00:03:41.150 "product_name": "Malloc disk", 00:03:41.150 "block_size": 4096, 00:03:41.150 "num_blocks": 256, 00:03:41.150 "uuid": "242a4d70-c62f-46a8-9d26-e6d49bab8572", 00:03:41.150 "assigned_rate_limits": { 00:03:41.150 "rw_ios_per_sec": 0, 00:03:41.150 "rw_mbytes_per_sec": 0, 00:03:41.150 "r_mbytes_per_sec": 0, 00:03:41.150 "w_mbytes_per_sec": 0 00:03:41.150 }, 00:03:41.150 "claimed": false, 00:03:41.150 "zoned": false, 00:03:41.150 "supported_io_types": { 00:03:41.150 "read": true, 00:03:41.150 "write": true, 00:03:41.150 "unmap": true, 00:03:41.150 "flush": true, 00:03:41.150 "reset": true, 00:03:41.150 "nvme_admin": false, 00:03:41.150 "nvme_io": false, 00:03:41.150 "nvme_io_md": false, 00:03:41.150 "write_zeroes": true, 00:03:41.150 "zcopy": true, 00:03:41.150 "get_zone_info": false, 00:03:41.150 "zone_management": false, 00:03:41.150 "zone_append": false, 00:03:41.150 "compare": false, 00:03:41.150 "compare_and_write": false, 00:03:41.150 "abort": true, 00:03:41.150 "seek_hole": false, 00:03:41.150 "seek_data": false, 00:03:41.150 "copy": true, 00:03:41.150 "nvme_iov_md": false 
00:03:41.150 }, 00:03:41.150 "memory_domains": [ 00:03:41.150 { 00:03:41.150 "dma_device_id": "system", 00:03:41.150 "dma_device_type": 1 00:03:41.150 }, 00:03:41.150 { 00:03:41.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.150 "dma_device_type": 2 00:03:41.150 } 00:03:41.150 ], 00:03:41.150 "driver_specific": {} 00:03:41.150 } 00:03:41.150 ]' 00:03:41.150 15:15:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:41.150 15:15:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:41.150 15:15:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:41.150 15:15:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.150 15:15:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.150 15:15:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.150 15:15:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:41.150 15:15:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.150 15:15:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.150 15:15:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.150 15:15:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:41.150 15:15:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:41.150 15:15:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:41.150 00:03:41.150 real 0m0.156s 00:03:41.150 user 0m0.094s 00:03:41.150 sys 0m0.026s 00:03:41.150 15:15:59 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:41.150 15:15:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.150 ************************************ 00:03:41.150 END TEST rpc_plugins 00:03:41.150 ************************************ 00:03:41.409 15:15:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:41.409 15:15:59 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:41.409 15:15:59 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:41.409 15:15:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.409 ************************************ 00:03:41.409 START TEST rpc_trace_cmd_test 00:03:41.409 ************************************ 00:03:41.409 15:15:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:41.409 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:41.409 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:41.410 15:15:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.410 15:15:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:41.410 15:15:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.410 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:41.410 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3532380", 00:03:41.410 "tpoint_group_mask": "0x8", 00:03:41.410 "iscsi_conn": { 00:03:41.410 "mask": "0x2", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "scsi": { 00:03:41.410 "mask": "0x4", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "bdev": { 00:03:41.410 "mask": "0x8", 00:03:41.410 "tpoint_mask": "0xffffffffffffffff" 00:03:41.410 }, 00:03:41.410 "nvmf_rdma": { 00:03:41.410 "mask": "0x10", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "nvmf_tcp": { 00:03:41.410 "mask": "0x20", 00:03:41.410 
"tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "ftl": { 00:03:41.410 "mask": "0x40", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "blobfs": { 00:03:41.410 "mask": "0x80", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "dsa": { 00:03:41.410 "mask": "0x200", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "thread": { 00:03:41.410 "mask": "0x400", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "nvme_pcie": { 00:03:41.410 "mask": "0x800", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "iaa": { 00:03:41.410 "mask": "0x1000", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "nvme_tcp": { 00:03:41.410 "mask": "0x2000", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "bdev_nvme": { 00:03:41.410 "mask": "0x4000", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "sock": { 00:03:41.410 "mask": "0x8000", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "blob": { 00:03:41.410 "mask": "0x10000", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "bdev_raid": { 00:03:41.410 "mask": "0x20000", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 }, 00:03:41.410 "scheduler": { 00:03:41.410 "mask": "0x40000", 00:03:41.410 "tpoint_mask": "0x0" 00:03:41.410 } 00:03:41.410 }' 00:03:41.410 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:41.410 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:41.410 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:41.410 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:41.410 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:41.410 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:41.410 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:41.670 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:41.670 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:41.670 15:15:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:41.670 00:03:41.670 real 0m0.253s 00:03:41.670 user 0m0.206s 00:03:41.670 sys 0m0.039s 00:03:41.670 15:15:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:41.670 15:15:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:41.670 ************************************ 00:03:41.670 END TEST rpc_trace_cmd_test 00:03:41.670 ************************************ 00:03:41.670 15:15:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:41.670 15:15:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:41.670 15:15:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:41.670 15:15:59 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:41.670 15:15:59 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:41.670 15:15:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.670 ************************************ 00:03:41.670 START TEST rpc_daemon_integrity 00:03:41.670 ************************************ 00:03:41.670 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:41.670 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:41.670 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.670 15:15:59 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.670 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.670 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:41.670 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:41.670 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:41.670 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:41.670 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.671 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.671 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.671 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:41.671 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:41.671 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.671 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.671 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.671 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:41.671 { 00:03:41.671 "name": "Malloc2", 00:03:41.671 "aliases": [ 00:03:41.671 "85b30c25-d4d9-4c3f-95fb-aa80728eac9b" 00:03:41.671 ], 00:03:41.671 "product_name": "Malloc disk", 00:03:41.671 "block_size": 512, 00:03:41.671 "num_blocks": 16384, 00:03:41.671 "uuid": "85b30c25-d4d9-4c3f-95fb-aa80728eac9b", 00:03:41.671 "assigned_rate_limits": { 00:03:41.671 "rw_ios_per_sec": 0, 00:03:41.671 "rw_mbytes_per_sec": 0, 00:03:41.671 "r_mbytes_per_sec": 0, 00:03:41.671 "w_mbytes_per_sec": 0 00:03:41.671 }, 00:03:41.671 "claimed": false, 00:03:41.671 "zoned": false, 00:03:41.671 "supported_io_types": { 00:03:41.671 "read": true, 00:03:41.671 "write": true, 00:03:41.671 "unmap": true, 00:03:41.671 "flush": true, 00:03:41.671 "reset": true, 00:03:41.671 "nvme_admin": false, 00:03:41.671 "nvme_io": false, 00:03:41.671 "nvme_io_md": false, 00:03:41.671 "write_zeroes": true, 00:03:41.671 "zcopy": true, 00:03:41.671 "get_zone_info": false, 00:03:41.671 "zone_management": false, 00:03:41.671 "zone_append": false, 00:03:41.671 "compare": false, 00:03:41.671 "compare_and_write": false, 00:03:41.671 "abort": true, 00:03:41.671 "seek_hole": false, 00:03:41.671 "seek_data": false, 00:03:41.671 "copy": true, 00:03:41.671 "nvme_iov_md": false 00:03:41.671 }, 00:03:41.671 "memory_domains": [ 00:03:41.671 { 00:03:41.671 "dma_device_id": "system", 00:03:41.671 "dma_device_type": 1 00:03:41.671 }, 00:03:41.671 { 00:03:41.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.671 "dma_device_type": 2 00:03:41.671 } 00:03:41.671 ], 00:03:41.671 "driver_specific": {} 00:03:41.671 } 00:03:41.671 ]' 00:03:41.671 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:41.931 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:41.931 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:41.931 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.931 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.931 [2024-11-06 15:15:59.676503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:41.931 
[2024-11-06 15:15:59.676544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:41.931 [2024-11-06 15:15:59.676561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xec3550 00:03:41.931 [2024-11-06 15:15:59.676569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:41.931 [2024-11-06 15:15:59.678110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:41.931 [2024-11-06 15:15:59.678145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:41.931 Passthru0 00:03:41.931 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.931 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:41.931 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.931 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.931 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.931 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:41.931 { 00:03:41.931 "name": "Malloc2", 00:03:41.931 "aliases": [ 00:03:41.931 "85b30c25-d4d9-4c3f-95fb-aa80728eac9b" 00:03:41.931 ], 00:03:41.931 "product_name": "Malloc disk", 00:03:41.931 "block_size": 512, 00:03:41.931 "num_blocks": 16384, 00:03:41.931 "uuid": "85b30c25-d4d9-4c3f-95fb-aa80728eac9b", 00:03:41.931 "assigned_rate_limits": { 00:03:41.931 "rw_ios_per_sec": 0, 00:03:41.931 "rw_mbytes_per_sec": 0, 00:03:41.931 "r_mbytes_per_sec": 0, 00:03:41.931 "w_mbytes_per_sec": 0 00:03:41.931 }, 00:03:41.931 "claimed": true, 00:03:41.931 "claim_type": "exclusive_write", 00:03:41.931 "zoned": false, 00:03:41.931 "supported_io_types": { 00:03:41.931 "read": true, 00:03:41.931 "write": true, 00:03:41.931 "unmap": true, 00:03:41.931 "flush": true, 00:03:41.931 "reset": true, 00:03:41.932 "nvme_admin": false, 00:03:41.932 "nvme_io": false, 00:03:41.932 "nvme_io_md": false, 00:03:41.932 "write_zeroes": true, 00:03:41.932 "zcopy": true, 00:03:41.932 "get_zone_info": false, 00:03:41.932 "zone_management": false, 00:03:41.932 "zone_append": false, 00:03:41.932 "compare": false, 00:03:41.932 "compare_and_write": false, 00:03:41.932 "abort": true, 00:03:41.932 "seek_hole": false, 00:03:41.932 "seek_data": false, 00:03:41.932 "copy": true, 00:03:41.932 "nvme_iov_md": false 00:03:41.932 }, 00:03:41.932 "memory_domains": [ 00:03:41.932 { 00:03:41.932 "dma_device_id": "system", 00:03:41.932 "dma_device_type": 1 00:03:41.932 }, 00:03:41.932 { 00:03:41.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.932 "dma_device_type": 2 00:03:41.932 } 00:03:41.932 ], 00:03:41.932 "driver_specific": {} 00:03:41.932 }, 00:03:41.932 { 00:03:41.932 "name": "Passthru0", 00:03:41.932 "aliases": [ 00:03:41.932 "6a83ec22-ec3d-5504-a88e-1e965ce16419" 00:03:41.932 ], 00:03:41.932 "product_name": "passthru", 00:03:41.932 "block_size": 512, 00:03:41.932 "num_blocks": 16384, 00:03:41.932 "uuid": "6a83ec22-ec3d-5504-a88e-1e965ce16419", 00:03:41.932 "assigned_rate_limits": { 00:03:41.932 "rw_ios_per_sec": 0, 00:03:41.932 "rw_mbytes_per_sec": 0, 00:03:41.932 "r_mbytes_per_sec": 0, 00:03:41.932 "w_mbytes_per_sec": 0 00:03:41.932 }, 00:03:41.932 "claimed": false, 00:03:41.932 "zoned": false, 00:03:41.932 "supported_io_types": { 00:03:41.932 "read": true, 00:03:41.932 "write": true, 00:03:41.932 "unmap": true, 00:03:41.932 "flush": true, 00:03:41.932 "reset": true, 
00:03:41.932 "nvme_admin": false, 00:03:41.932 "nvme_io": false, 00:03:41.932 "nvme_io_md": false, 00:03:41.932 "write_zeroes": true, 00:03:41.932 "zcopy": true, 00:03:41.932 "get_zone_info": false, 00:03:41.932 "zone_management": false, 00:03:41.932 "zone_append": false, 00:03:41.932 "compare": false, 00:03:41.932 "compare_and_write": false, 00:03:41.932 "abort": true, 00:03:41.932 "seek_hole": false, 00:03:41.932 "seek_data": false, 00:03:41.932 "copy": true, 00:03:41.932 "nvme_iov_md": false 00:03:41.932 }, 00:03:41.932 "memory_domains": [ 00:03:41.932 { 00:03:41.932 "dma_device_id": "system", 00:03:41.932 "dma_device_type": 1 00:03:41.932 }, 00:03:41.932 { 00:03:41.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.932 "dma_device_type": 2 00:03:41.932 } 00:03:41.932 ], 00:03:41.932 "driver_specific": { 00:03:41.932 "passthru": { 00:03:41.932 "name": "Passthru0", 00:03:41.932 "base_bdev_name": "Malloc2" 00:03:41.932 } 00:03:41.932 } 00:03:41.932 } 00:03:41.932 ]' 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:41.932 00:03:41.932 real 0m0.292s 00:03:41.932 user 0m0.181s 00:03:41.932 sys 0m0.042s 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:41.932 15:15:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.932 ************************************ 00:03:41.932 END TEST rpc_daemon_integrity 00:03:41.932 ************************************ 00:03:41.932 15:15:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:41.932 15:15:59 rpc -- rpc/rpc.sh@84 -- # killprocess 3532380 00:03:41.932 15:15:59 rpc -- common/autotest_common.sh@952 -- # '[' -z 3532380 ']' 00:03:41.932 15:15:59 rpc -- common/autotest_common.sh@956 -- # kill -0 3532380 00:03:41.932 15:15:59 rpc -- common/autotest_common.sh@957 -- # uname 00:03:41.932 15:15:59 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:41.932 15:15:59 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3532380 
00:03:42.192 15:15:59 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:42.193 15:15:59 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:42.193 15:15:59 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3532380' 00:03:42.193 killing process with pid 3532380 00:03:42.193 15:15:59 rpc -- common/autotest_common.sh@971 -- # kill 3532380 00:03:42.193 15:15:59 rpc -- common/autotest_common.sh@976 -- # wait 3532380 00:03:42.453 00:03:42.453 real 0m2.724s 00:03:42.453 user 0m3.434s 00:03:42.453 sys 0m0.870s 00:03:42.453 15:16:00 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:42.453 15:16:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.453 ************************************ 00:03:42.453 END TEST rpc 00:03:42.453 ************************************ 00:03:42.453 15:16:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:42.453 15:16:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:42.453 15:16:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:42.453 15:16:00 -- common/autotest_common.sh@10 -- # set +x 00:03:42.453 ************************************ 00:03:42.453 START TEST skip_rpc 00:03:42.453 ************************************ 00:03:42.453 15:16:00 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:42.453 * Looking for test storage... 00:03:42.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:42.453 15:16:00 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:42.453 15:16:00 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:42.453 15:16:00 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:42.714 15:16:00 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.714 15:16:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:42.714 15:16:00 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.714 15:16:00 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.714 --rc genhtml_branch_coverage=1 00:03:42.714 --rc genhtml_function_coverage=1 00:03:42.714 --rc genhtml_legend=1 00:03:42.714 --rc geninfo_all_blocks=1 00:03:42.714 --rc geninfo_unexecuted_blocks=1 00:03:42.714 00:03:42.714 ' 00:03:42.714 15:16:00 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.714 --rc genhtml_branch_coverage=1 00:03:42.714 --rc genhtml_function_coverage=1 00:03:42.714 --rc genhtml_legend=1 00:03:42.714 --rc geninfo_all_blocks=1 00:03:42.714 --rc geninfo_unexecuted_blocks=1 00:03:42.714 00:03:42.714 ' 00:03:42.714 15:16:00 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.714 --rc genhtml_branch_coverage=1 00:03:42.714 --rc genhtml_function_coverage=1 00:03:42.714 --rc genhtml_legend=1 00:03:42.714 --rc geninfo_all_blocks=1 00:03:42.714 --rc geninfo_unexecuted_blocks=1 00:03:42.714 00:03:42.714 ' 00:03:42.714 15:16:00 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.714 --rc genhtml_branch_coverage=1 00:03:42.714 --rc genhtml_function_coverage=1 00:03:42.714 --rc genhtml_legend=1 00:03:42.714 --rc geninfo_all_blocks=1 00:03:42.714 --rc geninfo_unexecuted_blocks=1 00:03:42.714 00:03:42.714 ' 00:03:42.714 15:16:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:42.714 15:16:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:42.714 15:16:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:42.714 15:16:00 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:42.714 15:16:00 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:42.714 15:16:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.714 ************************************ 00:03:42.714 START TEST skip_rpc 00:03:42.714 ************************************ 00:03:42.714 15:16:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:42.714 
15:16:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3533234 00:03:42.714 15:16:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:42.714 15:16:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:42.714 15:16:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:42.714 [2024-11-06 15:16:00.570210] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:03:42.714 [2024-11-06 15:16:00.570267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3533234 ] 00:03:42.714 [2024-11-06 15:16:00.663058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.974 [2024-11-06 15:16:00.715029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3533234 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3533234 ']' 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3533234 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3533234 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3533234' 00:03:48.257 killing process with pid 3533234 00:03:48.257 15:16:05 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3533234 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3533234 00:03:48.257 00:03:48.257 real 0m5.266s 00:03:48.257 user 0m5.011s 00:03:48.257 sys 0m0.303s 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.257 15:16:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.257 ************************************ 00:03:48.257 END TEST skip_rpc 00:03:48.257 ************************************ 00:03:48.257 15:16:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:48.257 15:16:05 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.257 15:16:05 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.257 15:16:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.257 ************************************ 00:03:48.257 START TEST skip_rpc_with_json 00:03:48.257 ************************************ 00:03:48.257 15:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:03:48.257 15:16:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:48.257 15:16:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3534271 00:03:48.257 15:16:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.257 15:16:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:48.257 15:16:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3534271 00:03:48.257 15:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3534271 ']' 00:03:48.257 15:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.257 15:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:48.257 15:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.257 15:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:48.257 15:16:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:48.257 [2024-11-06 15:16:05.912628] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
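Annotation: the skip_rpc pass that just finished reduces to a simple pattern, visible in the trace above: start spdk_tgt with --no-rpc-server, assert that any RPC call fails, then kill the target. The sketch below restates it as a standalone script; it is a minimal approximation, not the literal body of rpc/skip_rpc.sh, and the bare scripts/rpc.py call stands in for the suite's rpc_cmd/NOT/killprocess helpers.

  #!/usr/bin/env bash
  # Minimal sketch of test_skip_rpc: with --no-rpc-server, every RPC must fail.
  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
  "$SPDK_BIN/spdk_tgt" --no-rpc-server -m 0x1 &
  spdk_pid=$!
  trap 'kill "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT
  sleep 5                              # same startup grace period the test uses
  if scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC succeeded without an RPC server" >&2
      exit 1
  fi
  trap - SIGINT SIGTERM EXIT
  kill "$spdk_pid" && wait "$spdk_pid" || true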
00:03:48.257 [2024-11-06 15:16:05.912678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3534271 ]
00:03:48.257 [2024-11-06 15:16:05.996922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:48.257 [2024-11-06 15:16:06.028104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:48.828 [2024-11-06 15:16:06.692078] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:03:48.828 request:
00:03:48.828 {
00:03:48.828 "trtype": "tcp",
00:03:48.828 "method": "nvmf_get_transports",
00:03:48.828 "req_id": 1
00:03:48.828 }
00:03:48.828 Got JSON-RPC error response
00:03:48.828 response:
00:03:48.828 {
00:03:48.828 "code": -19,
00:03:48.828 "message": "No such device"
00:03:48.828 }
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:48.828 [2024-11-06 15:16:06.704171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:48.828 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:49.088 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:49.088 15:16:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:03:49.088 {
00:03:49.088 "subsystems": [
00:03:49.088 {
00:03:49.088 "subsystem": "fsdev",
00:03:49.088 "config": [
00:03:49.088 {
00:03:49.088 "method": "fsdev_set_opts",
00:03:49.088 "params": {
00:03:49.088 "fsdev_io_pool_size": 65535,
00:03:49.088 "fsdev_io_cache_size": 256
00:03:49.088 }
00:03:49.088 }
00:03:49.088 ]
00:03:49.088 },
00:03:49.088 {
00:03:49.088 "subsystem": "vfio_user_target",
00:03:49.088 "config": null
00:03:49.088 },
00:03:49.088 {
00:03:49.088 "subsystem": "keyring",
00:03:49.088 "config": []
00:03:49.088 },
00:03:49.088 {
00:03:49.088 "subsystem": "iobuf",
00:03:49.088 "config": [
00:03:49.088 {
00:03:49.088 "method": "iobuf_set_options",
00:03:49.088 "params": {
00:03:49.088 "small_pool_count": 8192,
00:03:49.088 "large_pool_count": 1024,
00:03:49.088 "small_bufsize": 8192,
00:03:49.088 "large_bufsize": 135168,
00:03:49.088 "enable_numa": false
00:03:49.088 }
00:03:49.088 }
00:03:49.088 ]
00:03:49.088 },
00:03:49.088 {
00:03:49.088 "subsystem": "sock",
00:03:49.088 "config": [
00:03:49.088 {
00:03:49.088 "method": "sock_set_default_impl",
00:03:49.088 "params": {
00:03:49.088 "impl_name": "posix"
00:03:49.088 }
00:03:49.088 },
00:03:49.088 {
00:03:49.088 "method": "sock_impl_set_options",
00:03:49.088 "params": {
00:03:49.088 "impl_name": "ssl",
00:03:49.088 "recv_buf_size": 4096,
00:03:49.088 "send_buf_size": 4096,
00:03:49.088 "enable_recv_pipe": true,
00:03:49.088 "enable_quickack": false,
00:03:49.088 "enable_placement_id": 0,
00:03:49.088 "enable_zerocopy_send_server": true,
00:03:49.088 "enable_zerocopy_send_client": false,
00:03:49.088 "zerocopy_threshold": 0,
00:03:49.088 "tls_version": 0,
00:03:49.088 "enable_ktls": false
00:03:49.088 }
00:03:49.088 },
00:03:49.088 {
00:03:49.088 "method": "sock_impl_set_options",
00:03:49.088 "params": {
00:03:49.088 "impl_name": "posix",
00:03:49.088 "recv_buf_size": 2097152,
00:03:49.088 "send_buf_size": 2097152,
00:03:49.088 "enable_recv_pipe": true,
00:03:49.088 "enable_quickack": false,
00:03:49.088 "enable_placement_id": 0,
00:03:49.088 "enable_zerocopy_send_server": true,
00:03:49.088 "enable_zerocopy_send_client": false,
00:03:49.088 "zerocopy_threshold": 0,
00:03:49.088 "tls_version": 0,
00:03:49.088 "enable_ktls": false
00:03:49.088 }
00:03:49.088 }
00:03:49.088 ]
00:03:49.088 },
00:03:49.088 {
00:03:49.088 "subsystem": "vmd",
00:03:49.088 "config": []
00:03:49.088 },
00:03:49.088 {
00:03:49.088 "subsystem": "accel",
00:03:49.088 "config": [
00:03:49.088 {
00:03:49.088 "method": "accel_set_options",
00:03:49.088 "params": {
00:03:49.088 "small_cache_size": 128,
00:03:49.088 "large_cache_size": 16,
00:03:49.088 "task_count": 2048,
00:03:49.088 "sequence_count": 2048,
00:03:49.088 "buf_count": 2048
00:03:49.088 }
00:03:49.088 }
00:03:49.088 ]
00:03:49.088 },
00:03:49.088 {
00:03:49.088 "subsystem": "bdev",
00:03:49.088 "config": [
00:03:49.088 {
00:03:49.088 "method": "bdev_set_options",
00:03:49.088 "params": {
00:03:49.088 "bdev_io_pool_size": 65535,
00:03:49.088 "bdev_io_cache_size": 256,
00:03:49.088 "bdev_auto_examine": true,
00:03:49.088 "iobuf_small_cache_size": 128,
00:03:49.088 "iobuf_large_cache_size": 16
00:03:49.088 }
00:03:49.088 },
00:03:49.088 {
00:03:49.088 "method": "bdev_raid_set_options",
00:03:49.088 "params": {
00:03:49.088 "process_window_size_kb": 1024,
00:03:49.088 "process_max_bandwidth_mb_sec": 0
00:03:49.088 }
00:03:49.088 },
00:03:49.088 {
00:03:49.088 "method": "bdev_iscsi_set_options",
00:03:49.088 "params": {
00:03:49.088 "timeout_sec": 30
00:03:49.088 }
00:03:49.088 },
00:03:49.088 {
00:03:49.088 "method": "bdev_nvme_set_options",
00:03:49.088 "params": {
00:03:49.088 "action_on_timeout": "none",
00:03:49.088 "timeout_us": 0,
00:03:49.088 "timeout_admin_us": 0,
00:03:49.088 "keep_alive_timeout_ms": 10000,
00:03:49.088 "arbitration_burst": 0,
00:03:49.088 "low_priority_weight": 0,
00:03:49.088 "medium_priority_weight": 0,
00:03:49.089 "high_priority_weight": 0,
00:03:49.089 "nvme_adminq_poll_period_us": 10000,
00:03:49.089 "nvme_ioq_poll_period_us": 0,
00:03:49.089 "io_queue_requests": 0,
00:03:49.089 "delay_cmd_submit": true,
00:03:49.089 "transport_retry_count": 4,
00:03:49.089 "bdev_retry_count": 3,
00:03:49.089 "transport_ack_timeout": 0,
00:03:49.089 "ctrlr_loss_timeout_sec": 0,
00:03:49.089 "reconnect_delay_sec": 0,
00:03:49.089 "fast_io_fail_timeout_sec": 0,
00:03:49.089 "disable_auto_failback": false,
00:03:49.089 "generate_uuids": false,
00:03:49.089 "transport_tos": 0,
00:03:49.089 "nvme_error_stat": false,
00:03:49.089 "rdma_srq_size": 0,
00:03:49.089 "io_path_stat": false,
00:03:49.089 "allow_accel_sequence": false,
00:03:49.089 "rdma_max_cq_size": 0,
00:03:49.089 "rdma_cm_event_timeout_ms": 0,
00:03:49.089 "dhchap_digests": [
00:03:49.089 "sha256",
00:03:49.089 "sha384",
00:03:49.089 "sha512"
00:03:49.089 ],
00:03:49.089 "dhchap_dhgroups": [
00:03:49.089 "null",
00:03:49.089 "ffdhe2048",
00:03:49.089 "ffdhe3072",
00:03:49.089 "ffdhe4096",
00:03:49.089 "ffdhe6144",
00:03:49.089 "ffdhe8192"
00:03:49.089 ]
00:03:49.089 }
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "method": "bdev_nvme_set_hotplug",
00:03:49.089 "params": {
00:03:49.089 "period_us": 100000,
00:03:49.089 "enable": false
00:03:49.089 }
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "method": "bdev_wait_for_examine"
00:03:49.089 }
00:03:49.089 ]
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "subsystem": "scsi",
00:03:49.089 "config": null
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "subsystem": "scheduler",
00:03:49.089 "config": [
00:03:49.089 {
00:03:49.089 "method": "framework_set_scheduler",
00:03:49.089 "params": {
00:03:49.089 "name": "static"
00:03:49.089 }
00:03:49.089 }
00:03:49.089 ]
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "subsystem": "vhost_scsi",
00:03:49.089 "config": []
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "subsystem": "vhost_blk",
00:03:49.089 "config": []
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "subsystem": "ublk",
00:03:49.089 "config": []
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "subsystem": "nbd",
00:03:49.089 "config": []
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "subsystem": "nvmf",
00:03:49.089 "config": [
00:03:49.089 {
00:03:49.089 "method": "nvmf_set_config",
00:03:49.089 "params": {
00:03:49.089 "discovery_filter": "match_any",
00:03:49.089 "admin_cmd_passthru": {
00:03:49.089 "identify_ctrlr": false
00:03:49.089 },
00:03:49.089 "dhchap_digests": [
00:03:49.089 "sha256",
00:03:49.089 "sha384",
00:03:49.089 "sha512"
00:03:49.089 ],
00:03:49.089 "dhchap_dhgroups": [
00:03:49.089 "null",
00:03:49.089 "ffdhe2048",
00:03:49.089 "ffdhe3072",
00:03:49.089 "ffdhe4096",
00:03:49.089 "ffdhe6144",
00:03:49.089 "ffdhe8192"
00:03:49.089 ]
00:03:49.089 }
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "method": "nvmf_set_max_subsystems",
00:03:49.089 "params": {
00:03:49.089 "max_subsystems": 1024
00:03:49.089 }
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "method": "nvmf_set_crdt",
00:03:49.089 "params": {
00:03:49.089 "crdt1": 0,
00:03:49.089 "crdt2": 0,
00:03:49.089 "crdt3": 0
00:03:49.089 }
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "method": "nvmf_create_transport",
00:03:49.089 "params": {
00:03:49.089 "trtype": "TCP",
00:03:49.089 "max_queue_depth": 128,
00:03:49.089 "max_io_qpairs_per_ctrlr": 127,
00:03:49.089 "in_capsule_data_size": 4096,
00:03:49.089 "max_io_size": 131072,
00:03:49.089 "io_unit_size": 131072,
00:03:49.089 "max_aq_depth": 128,
00:03:49.089 "num_shared_buffers": 511,
00:03:49.089 "buf_cache_size": 4294967295,
00:03:49.089 "dif_insert_or_strip": false,
00:03:49.089 "zcopy": false,
00:03:49.089 "c2h_success": true,
00:03:49.089 "sock_priority": 0,
00:03:49.089 "abort_timeout_sec": 1,
00:03:49.089 "ack_timeout": 0,
00:03:49.089 "data_wr_pool_size": 0
00:03:49.089 }
00:03:49.089 }
00:03:49.089 ]
00:03:49.089 },
00:03:49.089 {
00:03:49.089 "subsystem": "iscsi",
00:03:49.089 "config": [
00:03:49.089 {
00:03:49.089 "method": "iscsi_set_options",
00:03:49.089 "params": {
00:03:49.089 "node_base": "iqn.2016-06.io.spdk",
00:03:49.089 "max_sessions": 128,
00:03:49.089 "max_connections_per_session": 2,
00:03:49.089 "max_queue_depth": 64,
00:03:49.089 "default_time2wait": 2,
00:03:49.089 "default_time2retain": 20,
00:03:49.089 "first_burst_length": 8192,
00:03:49.089 "immediate_data": true,
00:03:49.089 "allow_duplicated_isid": false,
00:03:49.089 "error_recovery_level": 0,
00:03:49.089 "nop_timeout": 60,
00:03:49.089 "nop_in_interval": 30,
00:03:49.089 "disable_chap": false,
00:03:49.089 "require_chap": false,
00:03:49.089 "mutual_chap": false,
00:03:49.089 "chap_group": 0,
00:03:49.089 "max_large_datain_per_connection": 64,
00:03:49.089 "max_r2t_per_connection": 4,
00:03:49.089 "pdu_pool_size": 36864,
00:03:49.089 "immediate_data_pool_size": 16384,
00:03:49.089 "data_out_pool_size": 2048
00:03:49.089 }
00:03:49.089 }
00:03:49.089 ]
00:03:49.089 }
00:03:49.089 ]
00:03:49.089 }
00:03:49.089 15:16:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:03:49.089 15:16:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3534271
00:03:49.089 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3534271 ']'
00:03:49.089 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3534271
00:03:49.089 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname
00:03:49.089 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:03:49.089 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3534271
00:03:49.089 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:03:49.089 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:03:49.089 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3534271'
00:03:49.089 killing process with pid 3534271
00:03:49.089 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3534271
00:03:49.089 15:16:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3534271
00:03:49.349 15:16:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3534609
00:03:49.349 15:16:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:03:49.349 15:16:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:03:54.629 15:16:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3534609
00:03:54.629 15:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3534609 ']'
00:03:54.629 15:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3534609
00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname
00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3534609
00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 --
# echo 'killing process with pid 3534609' 00:03:54.630 killing process with pid 3534609 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3534609 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3534609 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:54.630 00:03:54.630 real 0m6.548s 00:03:54.630 user 0m6.450s 00:03:54.630 sys 0m0.567s 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.630 ************************************ 00:03:54.630 END TEST skip_rpc_with_json 00:03:54.630 ************************************ 00:03:54.630 15:16:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:54.630 15:16:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:54.630 15:16:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:54.630 15:16:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.630 ************************************ 00:03:54.630 START TEST skip_rpc_with_delay 00:03:54.630 ************************************ 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:54.630 
[2024-11-06 15:16:12.544276] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:54.630 00:03:54.630 real 0m0.082s 00:03:54.630 user 0m0.056s 00:03:54.630 sys 0m0.025s 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.630 15:16:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:54.630 ************************************ 00:03:54.630 END TEST skip_rpc_with_delay 00:03:54.630 ************************************ 00:03:54.630 15:16:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:54.630 15:16:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:54.630 15:16:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:54.630 15:16:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:54.630 15:16:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:54.630 15:16:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.890 ************************************ 00:03:54.890 START TEST exit_on_failed_rpc_init 00:03:54.891 ************************************ 00:03:54.891 15:16:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:03:54.891 15:16:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3535681 00:03:54.891 15:16:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3535681 00:03:54.891 15:16:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:54.891 15:16:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3535681 ']' 00:03:54.891 15:16:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.891 15:16:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:54.891 15:16:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.891 15:16:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:54.891 15:16:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:54.891 [2024-11-06 15:16:12.698961] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:03:54.891 [2024-11-06 15:16:12.699009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3535681 ] 00:03:54.891 [2024-11-06 15:16:12.784325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.891 [2024-11-06 15:16:12.815071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:55.831 [2024-11-06 15:16:13.567270] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:03:55.831 [2024-11-06 15:16:13.567322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3535828 ] 00:03:55.831 [2024-11-06 15:16:13.657377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.831 [2024-11-06 15:16:13.693652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:55.831 [2024-11-06 15:16:13.693702] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
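Annotation: the listen ERROR above, and the two follow-on records just below it, are the expected outcome. exit_on_failed_rpc_init deliberately starts a second spdk_tgt while the first still owns /var/tmp/spdk.sock, and the test passes only if the second instance exits non-zero. A minimal sketch of the provocation, assuming the same build path; the real test uses waitforlisten rather than the fixed sleep shown here.

  # Second target on the same default RPC socket must refuse to start.
  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
  "$SPDK_BIN/spdk_tgt" -m 0x1 &            # first instance owns /var/tmp/spdk.sock
  first_pid=$!
  sleep 2                                  # assumption: crude wait-for-listen
  if "$SPDK_BIN/spdk_tgt" -m 0x2; then     # same socket, different core mask
      echo "unexpected: second target started" >&2
      exit 1
  fi
  kill "$first_pid" && wait "$first_pid" || true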
00:03:55.831 [2024-11-06 15:16:13.693712] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:55.831 [2024-11-06 15:16:13.693719] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3535681 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3535681 ']' 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3535681 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3535681 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3535681' 00:03:55.831 killing process with pid 3535681 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3535681 00:03:55.831 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3535681 00:03:56.092 00:03:56.092 real 0m1.341s 00:03:56.092 user 0m1.576s 00:03:56.092 sys 0m0.392s 00:03:56.092 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.092 15:16:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:56.092 ************************************ 00:03:56.092 END TEST exit_on_failed_rpc_init 00:03:56.092 ************************************ 00:03:56.092 15:16:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:56.092 00:03:56.092 real 0m13.755s 00:03:56.092 user 0m13.326s 00:03:56.092 sys 0m1.602s 00:03:56.092 15:16:14 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.092 15:16:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.092 ************************************ 00:03:56.092 END TEST skip_rpc 00:03:56.092 ************************************ 00:03:56.092 15:16:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:56.092 15:16:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.092 15:16:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.092 15:16:14 -- 
common/autotest_common.sh@10 -- # set +x 00:03:56.353 ************************************ 00:03:56.353 START TEST rpc_client 00:03:56.353 ************************************ 00:03:56.353 15:16:14 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:56.353 * Looking for test storage... 00:03:56.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:56.353 15:16:14 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:56.353 15:16:14 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:03:56.353 15:16:14 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:56.353 15:16:14 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.353 15:16:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:56.353 15:16:14 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.353 15:16:14 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:56.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.353 --rc genhtml_branch_coverage=1 00:03:56.353 --rc genhtml_function_coverage=1 00:03:56.353 --rc genhtml_legend=1 00:03:56.353 --rc geninfo_all_blocks=1 00:03:56.353 --rc geninfo_unexecuted_blocks=1 00:03:56.353 00:03:56.353 ' 00:03:56.353 15:16:14 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:56.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.353 --rc genhtml_branch_coverage=1 00:03:56.353 --rc genhtml_function_coverage=1 00:03:56.353 --rc genhtml_legend=1 00:03:56.353 --rc geninfo_all_blocks=1 00:03:56.353 --rc geninfo_unexecuted_blocks=1 00:03:56.353 00:03:56.353 ' 00:03:56.353 15:16:14 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:56.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.353 --rc genhtml_branch_coverage=1 00:03:56.353 --rc genhtml_function_coverage=1 00:03:56.353 --rc genhtml_legend=1 00:03:56.353 --rc geninfo_all_blocks=1 00:03:56.353 --rc geninfo_unexecuted_blocks=1 00:03:56.353 00:03:56.353 ' 00:03:56.353 15:16:14 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:56.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.353 --rc genhtml_branch_coverage=1 00:03:56.353 --rc genhtml_function_coverage=1 00:03:56.353 --rc genhtml_legend=1 00:03:56.353 --rc geninfo_all_blocks=1 00:03:56.353 --rc geninfo_unexecuted_blocks=1 00:03:56.353 00:03:56.353 ' 00:03:56.353 15:16:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:56.353 OK 00:03:56.353 15:16:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:56.353 00:03:56.353 real 0m0.222s 00:03:56.353 user 0m0.132s 00:03:56.353 sys 0m0.104s 00:03:56.353 15:16:14 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.353 15:16:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:56.353 ************************************ 00:03:56.353 END TEST rpc_client 00:03:56.353 ************************************ 00:03:56.615 15:16:14 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
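Annotation: the scripts/common.sh trace in the rpc_client block above, which repeats for json_config below, is the lcov version gate: each version string is split on '.', '-' and ':' and compared field by field to decide whether lcov is older than 2. A compact re-implementation for reference; the name version_lt is illustrative, it handles numeric fields only, and scripts/common.sh's cmp_versions supports more operators than this sketch.

  # Field-by-field numeric version compare, as in the "lt 1.15 2" trace above.
  version_lt() {
      local IFS='.-:'
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* options"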
00:03:56.615 15:16:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.615 15:16:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.615 15:16:14 -- common/autotest_common.sh@10 -- # set +x 00:03:56.615 ************************************ 00:03:56.615 START TEST json_config 00:03:56.615 ************************************ 00:03:56.615 15:16:14 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:56.615 15:16:14 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:56.615 15:16:14 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:03:56.615 15:16:14 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:56.615 15:16:14 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:56.615 15:16:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.615 15:16:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.615 15:16:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.615 15:16:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.615 15:16:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.615 15:16:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.615 15:16:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.615 15:16:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.615 15:16:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.615 15:16:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.615 15:16:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.615 15:16:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:56.615 15:16:14 json_config -- scripts/common.sh@345 -- # : 1 00:03:56.615 15:16:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.615 15:16:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.615 15:16:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:56.615 15:16:14 json_config -- scripts/common.sh@353 -- # local d=1 00:03:56.615 15:16:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.615 15:16:14 json_config -- scripts/common.sh@355 -- # echo 1 00:03:56.615 15:16:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.615 15:16:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:56.615 15:16:14 json_config -- scripts/common.sh@353 -- # local d=2 00:03:56.615 15:16:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.615 15:16:14 json_config -- scripts/common.sh@355 -- # echo 2 00:03:56.615 15:16:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.615 15:16:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.615 15:16:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.615 15:16:14 json_config -- scripts/common.sh@368 -- # return 0 00:03:56.615 15:16:14 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.615 15:16:14 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:56.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.615 --rc genhtml_branch_coverage=1 00:03:56.615 --rc genhtml_function_coverage=1 00:03:56.615 --rc genhtml_legend=1 00:03:56.615 --rc geninfo_all_blocks=1 00:03:56.615 --rc geninfo_unexecuted_blocks=1 00:03:56.615 00:03:56.615 ' 00:03:56.615 15:16:14 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:56.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.615 --rc genhtml_branch_coverage=1 00:03:56.615 --rc genhtml_function_coverage=1 00:03:56.615 --rc genhtml_legend=1 00:03:56.615 --rc geninfo_all_blocks=1 00:03:56.615 --rc geninfo_unexecuted_blocks=1 00:03:56.615 00:03:56.615 ' 00:03:56.615 15:16:14 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:56.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.615 --rc genhtml_branch_coverage=1 00:03:56.615 --rc genhtml_function_coverage=1 00:03:56.615 --rc genhtml_legend=1 00:03:56.615 --rc geninfo_all_blocks=1 00:03:56.615 --rc geninfo_unexecuted_blocks=1 00:03:56.615 00:03:56.615 ' 00:03:56.615 15:16:14 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:56.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.615 --rc genhtml_branch_coverage=1 00:03:56.615 --rc genhtml_function_coverage=1 00:03:56.615 --rc genhtml_legend=1 00:03:56.615 --rc geninfo_all_blocks=1 00:03:56.615 --rc geninfo_unexecuted_blocks=1 00:03:56.615 00:03:56.615 ' 00:03:56.615 15:16:14 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:56.615 15:16:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:56.615 15:16:14 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:56.615 15:16:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:56.615 15:16:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:56.615 15:16:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:56.615 15:16:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:56.615 15:16:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.615 15:16:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.615 15:16:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.615 15:16:14 json_config -- paths/export.sh@5 -- # export PATH 00:03:56.615 15:16:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.877 15:16:14 json_config -- nvmf/common.sh@51 -- # : 0 00:03:56.877 15:16:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:56.877 15:16:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
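Annotation: the "[: : integer expression expected" message printed just below comes from nvmf/common.sh line 33 testing an empty string with -eq ('[' '' -eq 1 ']'). It is harmless here, but the standard guard is to default the variable before the numeric test; SOME_FLAG in this sketch is a placeholder name, not the variable common.sh actually uses.

  # Guarding an integer test against an unset or empty variable.
  # Without the default, [ "" -eq 1 ] prints "integer expression expected".
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then    # SOME_FLAG is illustrative
      echo "flag set"
  fi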
00:03:56.877 15:16:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:56.877 15:16:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:56.877 15:16:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:56.877 15:16:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:56.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:56.877 15:16:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:56.877 15:16:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:56.877 15:16:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:56.877 INFO: JSON configuration test init 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:56.877 15:16:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:56.877 15:16:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:56.877 15:16:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:56.877 15:16:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.877 15:16:14 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:56.877 15:16:14 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:56.877 15:16:14 json_config -- json_config/common.sh@10 -- # shift 00:03:56.877 15:16:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:56.877 15:16:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:56.877 15:16:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:56.877 15:16:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.877 15:16:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.877 15:16:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3536151 00:03:56.877 15:16:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:56.877 Waiting for target to run... 00:03:56.877 15:16:14 json_config -- json_config/common.sh@25 -- # waitforlisten 3536151 /var/tmp/spdk_tgt.sock 00:03:56.877 15:16:14 json_config -- common/autotest_common.sh@833 -- # '[' -z 3536151 ']' 00:03:56.877 15:16:14 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:56.877 15:16:14 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:56.877 15:16:14 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:56.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:56.877 15:16:14 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:56.877 15:16:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:56.877 15:16:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.877 [2024-11-06 15:16:14.679596] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:03:56.877 [2024-11-06 15:16:14.679665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3536151 ] 00:03:57.137 [2024-11-06 15:16:14.986736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.137 [2024-11-06 15:16:15.012108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.709 15:16:15 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:57.709 15:16:15 json_config -- common/autotest_common.sh@866 -- # return 0 00:03:57.709 15:16:15 json_config -- json_config/common.sh@26 -- # echo '' 00:03:57.709 00:03:57.709 15:16:15 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:57.709 15:16:15 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:57.709 15:16:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:57.709 15:16:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.709 15:16:15 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:57.709 15:16:15 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:57.709 15:16:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:57.709 15:16:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.709 15:16:15 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:57.709 15:16:15 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:57.709 15:16:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:58.280 15:16:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:58.280 15:16:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:58.280 15:16:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:58.280 15:16:16 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@54 -- # sort 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:58.280 15:16:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:58.280 15:16:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:58.280 15:16:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.541 15:16:16 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:58.541 15:16:16 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:58.541 15:16:16 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:58.541 15:16:16 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:58.541 15:16:16 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:58.541 15:16:16 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:58.541 15:16:16 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:58.541 15:16:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:58.541 15:16:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.541 15:16:16 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:58.541 15:16:16 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:58.541 15:16:16 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:58.541 15:16:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:58.541 15:16:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:58.541 MallocForNvmf0 00:03:58.541 15:16:16 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:58.541 15:16:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:58.802 MallocForNvmf1 00:03:58.802 15:16:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:58.802 15:16:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:59.064 [2024-11-06 15:16:16.819948] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:59.064 15:16:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:59.064 15:16:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:59.064 15:16:17 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:59.064 15:16:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:59.324 15:16:17 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:59.324 15:16:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:59.585 15:16:17 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:59.585 15:16:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:59.585 [2024-11-06 15:16:17.526095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:59.585 15:16:17 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:59.585 15:16:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:59.585 15:16:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.846 15:16:17 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:59.846 15:16:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:59.846 15:16:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.846 15:16:17 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:59.846 15:16:17 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:59.846 15:16:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:59.846 MallocBdevForConfigChangeCheck 00:03:59.846 15:16:17 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:59.846 15:16:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:59.846 15:16:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.106 15:16:17 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:00.106 15:16:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:00.366 15:16:18 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:00.366 INFO: shutting down applications... 
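Reconstructed from the trace above, the whole NVMe-oF target configuration is a short sequence of rpc.py calls against the target's UNIX-domain socket. A minimal sketch, with $SPDK standing in for the repository root (an assumption; the log uses the full Jenkins workspace path):

    rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB bdev, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024 B blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420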
00:04:00.366 15:16:18 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:00.366 15:16:18 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:00.366 15:16:18 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:00.366 15:16:18 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:00.627 Calling clear_iscsi_subsystem 00:04:00.627 Calling clear_nvmf_subsystem 00:04:00.627 Calling clear_nbd_subsystem 00:04:00.627 Calling clear_ublk_subsystem 00:04:00.627 Calling clear_vhost_blk_subsystem 00:04:00.627 Calling clear_vhost_scsi_subsystem 00:04:00.627 Calling clear_bdev_subsystem 00:04:00.887 15:16:18 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:00.887 15:16:18 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:00.887 15:16:18 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:00.887 15:16:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:00.887 15:16:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:00.887 15:16:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:01.147 15:16:19 json_config -- json_config/json_config.sh@352 -- # break 00:04:01.147 15:16:19 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:01.147 15:16:19 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:01.147 15:16:19 json_config -- json_config/common.sh@31 -- # local app=target 00:04:01.147 15:16:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:01.147 15:16:19 json_config -- json_config/common.sh@35 -- # [[ -n 3536151 ]] 00:04:01.147 15:16:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3536151 00:04:01.147 15:16:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:01.147 15:16:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.147 15:16:19 json_config -- json_config/common.sh@41 -- # kill -0 3536151 00:04:01.147 15:16:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:01.720 15:16:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:01.720 15:16:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.720 15:16:19 json_config -- json_config/common.sh@41 -- # kill -0 3536151 00:04:01.720 15:16:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:01.720 15:16:19 json_config -- json_config/common.sh@43 -- # break 00:04:01.720 15:16:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:01.720 15:16:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:01.720 SPDK target shutdown done 00:04:01.720 15:16:19 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:01.720 INFO: relaunching applications... 
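The shutdown path traced above sends SIGINT and then polls the pid for up to thirty half-second intervals before giving up; a minimal sketch of that loop, assuming $pid holds the target's pid:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # signal 0 only tests that the pid still exists
        sleep 0.5
    done
    echo 'SPDK target shutdown done'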
00:04:01.720 15:16:19 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.720 15:16:19 json_config -- json_config/common.sh@9 -- # local app=target 00:04:01.720 15:16:19 json_config -- json_config/common.sh@10 -- # shift 00:04:01.720 15:16:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:01.720 15:16:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:01.720 15:16:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:01.720 15:16:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.720 15:16:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.720 15:16:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3537284 00:04:01.720 15:16:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:01.720 Waiting for target to run... 00:04:01.720 15:16:19 json_config -- json_config/common.sh@25 -- # waitforlisten 3537284 /var/tmp/spdk_tgt.sock 00:04:01.720 15:16:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.720 15:16:19 json_config -- common/autotest_common.sh@833 -- # '[' -z 3537284 ']' 00:04:01.721 15:16:19 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:01.721 15:16:19 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:01.721 15:16:19 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:01.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:01.721 15:16:19 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:01.721 15:16:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.721 [2024-11-06 15:16:19.582322] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:04:01.721 [2024-11-06 15:16:19.582383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3537284 ] 00:04:02.292 [2024-11-06 15:16:19.985195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.292 [2024-11-06 15:16:20.011041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.553 [2024-11-06 15:16:20.510404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:02.814 [2024-11-06 15:16:20.542766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:02.814 15:16:20 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:02.814 15:16:20 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:02.814 15:16:20 json_config -- json_config/common.sh@26 -- # echo '' 00:04:02.814 00:04:02.814 15:16:20 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:02.814 15:16:20 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:02.814 INFO: Checking if target configuration is the same... 
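The relaunch above restores the transport, subsystem, and listener without issuing any new RPCs because the previously saved JSON is fed back in with --json. A minimal sketch of the round trip, under the same $SPDK assumption as before:

    # Save the live configuration, then restart the target from it.
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &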
00:04:02.814 15:16:20 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.814 15:16:20 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:02.814 15:16:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.814 + '[' 2 -ne 2 ']' 00:04:02.814 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:02.814 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:02.814 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:02.814 +++ basename /dev/fd/62 00:04:02.814 ++ mktemp /tmp/62.XXX 00:04:02.814 + tmp_file_1=/tmp/62.6zU 00:04:02.814 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.814 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:02.814 + tmp_file_2=/tmp/spdk_tgt_config.json.7Et 00:04:02.814 + ret=0 00:04:02.814 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:03.074 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:03.075 + diff -u /tmp/62.6zU /tmp/spdk_tgt_config.json.7Et 00:04:03.075 + echo 'INFO: JSON config files are the same' 00:04:03.075 INFO: JSON config files are the same 00:04:03.075 + rm /tmp/62.6zU /tmp/spdk_tgt_config.json.7Et 00:04:03.075 + exit 0 00:04:03.075 15:16:20 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:03.075 15:16:20 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:03.075 INFO: changing configuration and checking if this can be detected... 00:04:03.075 15:16:20 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:03.075 15:16:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:03.335 15:16:21 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:03.335 15:16:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.335 15:16:21 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.335 + '[' 2 -ne 2 ']' 00:04:03.335 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:03.335 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
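The comparison traced above never diffs the raw files: both documents are first normalized with config_filter.py -method sort, so ordering differences do not register as configuration drift. A minimal sketch, assuming (as json_diff.sh uses it) that config_filter.py reads the document on stdin:

    sort_cfg() { $SPDK/test/json_config/config_filter.py -method sort; }
    $rpc save_config | sort_cfg > /tmp/live.json
    sort_cfg < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'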
00:04:03.335 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.335 +++ basename /dev/fd/62 00:04:03.335 ++ mktemp /tmp/62.XXX 00:04:03.335 + tmp_file_1=/tmp/62.cEM 00:04:03.335 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.335 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:03.335 + tmp_file_2=/tmp/spdk_tgt_config.json.Ivd 00:04:03.335 + ret=0 00:04:03.335 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:03.596 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:03.596 + diff -u /tmp/62.cEM /tmp/spdk_tgt_config.json.Ivd 00:04:03.596 + ret=1 00:04:03.596 + echo '=== Start of file: /tmp/62.cEM ===' 00:04:03.596 + cat /tmp/62.cEM 00:04:03.596 + echo '=== End of file: /tmp/62.cEM ===' 00:04:03.596 + echo '' 00:04:03.596 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Ivd ===' 00:04:03.596 + cat /tmp/spdk_tgt_config.json.Ivd 00:04:03.596 + echo '=== End of file: /tmp/spdk_tgt_config.json.Ivd ===' 00:04:03.596 + echo '' 00:04:03.596 + rm /tmp/62.cEM /tmp/spdk_tgt_config.json.Ivd 00:04:03.596 + exit 1 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:03.596 INFO: configuration change detected. 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:03.596 15:16:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.596 15:16:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@324 -- # [[ -n 3537284 ]] 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:03.596 15:16:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.596 15:16:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:03.596 15:16:21 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:03.596 15:16:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.596 15:16:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.857 15:16:21 json_config -- json_config/json_config.sh@330 -- # killprocess 3537284 00:04:03.857 15:16:21 json_config -- common/autotest_common.sh@952 -- # '[' -z 3537284 ']' 00:04:03.857 15:16:21 json_config -- common/autotest_common.sh@956 -- # kill -0 3537284 00:04:03.857 15:16:21 json_config -- common/autotest_common.sh@957 -- # uname 00:04:03.857 15:16:21 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:03.857 15:16:21 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3537284 00:04:03.857 15:16:21 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:03.857 15:16:21 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:03.857 15:16:21 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3537284' 00:04:03.857 killing process with pid 3537284 00:04:03.857 15:16:21 json_config -- common/autotest_common.sh@971 -- # kill 3537284 00:04:03.857 15:16:21 json_config -- common/autotest_common.sh@976 -- # wait 3537284 00:04:04.117 15:16:21 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.117 15:16:21 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:04.117 15:16:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:04.117 15:16:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.117 15:16:21 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:04.117 15:16:21 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:04.117 INFO: Success 00:04:04.117 00:04:04.117 real 0m7.578s 00:04:04.117 user 0m9.081s 00:04:04.117 sys 0m2.097s 00:04:04.117 15:16:21 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:04.117 15:16:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.117 ************************************ 00:04:04.117 END TEST json_config 00:04:04.117 ************************************ 00:04:04.117 15:16:22 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:04.117 15:16:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:04.117 15:16:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:04.117 15:16:22 -- common/autotest_common.sh@10 -- # set +x 00:04:04.117 ************************************ 00:04:04.117 START TEST json_config_extra_key 00:04:04.117 ************************************ 00:04:04.117 15:16:22 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:04.378 15:16:22 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:04.378 15:16:22 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:04.378 15:16:22 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:04.379 15:16:22 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.379 15:16:22 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:04.379 15:16:22 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.379 15:16:22 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:04.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.379 --rc genhtml_branch_coverage=1 00:04:04.379 --rc genhtml_function_coverage=1 00:04:04.379 --rc genhtml_legend=1 00:04:04.379 --rc geninfo_all_blocks=1 00:04:04.379 --rc geninfo_unexecuted_blocks=1 00:04:04.379 00:04:04.379 ' 00:04:04.379 15:16:22 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:04.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.379 --rc genhtml_branch_coverage=1 00:04:04.379 --rc genhtml_function_coverage=1 00:04:04.379 --rc genhtml_legend=1 00:04:04.379 --rc geninfo_all_blocks=1 00:04:04.379 --rc geninfo_unexecuted_blocks=1 00:04:04.379 00:04:04.379 ' 00:04:04.379 15:16:22 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:04.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.379 --rc genhtml_branch_coverage=1 00:04:04.379 --rc genhtml_function_coverage=1 00:04:04.379 --rc genhtml_legend=1 00:04:04.379 --rc geninfo_all_blocks=1 00:04:04.379 --rc geninfo_unexecuted_blocks=1 00:04:04.379 00:04:04.379 ' 00:04:04.379 15:16:22 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:04.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.379 --rc genhtml_branch_coverage=1 00:04:04.379 --rc genhtml_function_coverage=1 00:04:04.379 --rc genhtml_legend=1 00:04:04.379 --rc geninfo_all_blocks=1 00:04:04.379 --rc geninfo_unexecuted_blocks=1 00:04:04.379 00:04:04.379 ' 00:04:04.379 15:16:22 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.379 15:16:22 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.379 15:16:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.379 15:16:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.379 15:16:22 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.379 15:16:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:04.379 15:16:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:04.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:04.379 15:16:22 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:04.379 15:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:04.379 15:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:04.379 15:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:04.379 15:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:04.379 15:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:04.379 15:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:04.379 15:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:04.379 15:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:04.379 15:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:04.379 15:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:04.379 15:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:04.379 INFO: launching applications... 
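The `[: : integer expression expected` complaint above is a real, if harmless, script bug: line 33 of test/nvmf/common.sh runs a numeric test ('[' '' -eq 1 ']') on a variable that is unset, so [ sees an empty string where it needs an integer. The variable's name is not visible in the trace; with SOME_FLAG as a hypothetical stand-in, the usual defensive fix is to default it before the test:

    if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # unset/empty now reads as 0
        echo 'flag enabled'                # stand-in for the guarded action
    fi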
00:04:04.379 15:16:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:04.379 15:16:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:04.379 15:16:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:04.379 15:16:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.379 15:16:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.379 15:16:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.379 15:16:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.379 15:16:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.379 15:16:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3538075 00:04:04.379 15:16:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.379 Waiting for target to run... 00:04:04.380 15:16:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3538075 /var/tmp/spdk_tgt.sock 00:04:04.380 15:16:22 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3538075 ']' 00:04:04.380 15:16:22 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.380 15:16:22 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:04.380 15:16:22 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:04.380 15:16:22 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.380 15:16:22 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:04.380 15:16:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:04.380 [2024-11-06 15:16:22.323446] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:04:04.380 [2024-11-06 15:16:22.323517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3538075 ] 00:04:04.951 [2024-11-06 15:16:22.657151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.951 [2024-11-06 15:16:22.686644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.211 15:16:23 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:05.211 15:16:23 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:05.211 15:16:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:05.211 00:04:05.211 15:16:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:05.211 INFO: shutting down applications... 
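waitforlisten above is what turns 'Waiting for target to run...' into a synchronization point: it polls the new target's RPC socket until a call succeeds, bounded by the max_retries=100 seen in the trace. A minimal sketch of that guard, assuming rpc.py's exit status is the readiness signal:

    for ((i = 0; i < 100; i++)); do
        $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 spdk_get_version \
            > /dev/null 2>&1 && break
        sleep 0.1
    done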
00:04:05.211 15:16:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:05.211 15:16:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:05.211 15:16:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:05.211 15:16:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3538075 ]] 00:04:05.211 15:16:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3538075 00:04:05.211 15:16:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:05.211 15:16:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.211 15:16:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3538075 00:04:05.211 15:16:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:05.781 15:16:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:05.781 15:16:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.781 15:16:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3538075 00:04:05.781 15:16:23 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:05.781 15:16:23 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:05.781 15:16:23 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:05.781 15:16:23 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:05.781 SPDK target shutdown done 00:04:05.781 15:16:23 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:05.781 Success 00:04:05.781 00:04:05.781 real 0m1.573s 00:04:05.781 user 0m1.144s 00:04:05.781 sys 0m0.457s 00:04:05.781 15:16:23 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:05.781 15:16:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:05.781 ************************************ 00:04:05.781 END TEST json_config_extra_key 00:04:05.781 ************************************ 00:04:05.781 15:16:23 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:05.781 15:16:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:05.781 15:16:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:05.781 15:16:23 -- common/autotest_common.sh@10 -- # set +x 00:04:05.781 ************************************ 00:04:05.781 START TEST alias_rpc 00:04:05.781 ************************************ 00:04:05.781 15:16:23 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:06.042 * Looking for test storage... 
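Every 'START TEST' / 'END TEST' banner pair and the trailing real/user/sys line in this log come from a timing wrapper around each test script. The real helper lives in autotest_common.sh and does more bookkeeping; a minimal run_test-like sketch of only the visible behaviour:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"            # produces the real/user/sys summary seen above
        echo "END TEST $name"
    }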
00:04:06.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.042 15:16:23 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:06.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.042 --rc genhtml_branch_coverage=1 00:04:06.042 --rc genhtml_function_coverage=1 00:04:06.042 --rc genhtml_legend=1 00:04:06.042 --rc geninfo_all_blocks=1 00:04:06.042 --rc geninfo_unexecuted_blocks=1 00:04:06.042 00:04:06.042 ' 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:06.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.042 --rc genhtml_branch_coverage=1 00:04:06.042 --rc genhtml_function_coverage=1 00:04:06.042 --rc genhtml_legend=1 00:04:06.042 --rc geninfo_all_blocks=1 00:04:06.042 --rc geninfo_unexecuted_blocks=1 00:04:06.042 00:04:06.042 ' 00:04:06.042 15:16:23 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:06.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.042 --rc genhtml_branch_coverage=1 00:04:06.042 --rc genhtml_function_coverage=1 00:04:06.042 --rc genhtml_legend=1 00:04:06.042 --rc geninfo_all_blocks=1 00:04:06.042 --rc geninfo_unexecuted_blocks=1 00:04:06.042 00:04:06.042 ' 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:06.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.042 --rc genhtml_branch_coverage=1 00:04:06.042 --rc genhtml_function_coverage=1 00:04:06.042 --rc genhtml_legend=1 00:04:06.042 --rc geninfo_all_blocks=1 00:04:06.042 --rc geninfo_unexecuted_blocks=1 00:04:06.042 00:04:06.042 ' 00:04:06.042 15:16:23 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:06.042 15:16:23 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3538445 00:04:06.042 15:16:23 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3538445 00:04:06.042 15:16:23 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 3538445 ']' 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:06.042 15:16:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.042 [2024-11-06 15:16:23.969956] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:04:06.042 [2024-11-06 15:16:23.970031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3538445 ] 00:04:06.302 [2024-11-06 15:16:24.054963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.302 [2024-11-06 15:16:24.086607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.872 15:16:24 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:06.872 15:16:24 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:06.872 15:16:24 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:07.132 15:16:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3538445 00:04:07.132 15:16:24 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3538445 ']' 00:04:07.132 15:16:24 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3538445 00:04:07.132 15:16:24 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:07.132 15:16:24 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:07.132 15:16:24 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3538445 00:04:07.132 15:16:25 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:07.132 15:16:25 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:07.132 15:16:25 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3538445' 00:04:07.132 killing process with pid 3538445 00:04:07.132 15:16:25 alias_rpc -- common/autotest_common.sh@971 -- # kill 3538445 00:04:07.132 15:16:25 alias_rpc -- common/autotest_common.sh@976 -- # wait 3538445 00:04:07.393 00:04:07.393 real 0m1.526s 00:04:07.393 user 0m1.689s 00:04:07.393 sys 0m0.430s 00:04:07.393 15:16:25 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.393 15:16:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.393 ************************************ 00:04:07.393 END TEST alias_rpc 00:04:07.393 ************************************ 00:04:07.393 15:16:25 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:07.393 15:16:25 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:07.393 15:16:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:07.393 15:16:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:07.393 15:16:25 -- common/autotest_common.sh@10 -- # set +x 00:04:07.393 ************************************ 00:04:07.393 START TEST spdkcli_tcp 00:04:07.393 ************************************ 00:04:07.393 15:16:25 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:07.653 * Looking for test storage... 
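The cmp_versions trace that precedes each test here (and in the json_config_extra_key and spdkcli_tcp sections) splits versions on '.', '-' and ':' and compares them field by field to decide whether lcov 1.15 predates 2. The same idea as a compact sketch:

    # Return 0 when $1 sorts before $2, component-wise.
    version_lt() {
        local IFS='.-:' i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x'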
00:04:07.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:07.653 15:16:25 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:07.653 15:16:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:07.653 15:16:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:07.653 15:16:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.653 15:16:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:07.654 15:16:25 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.654 15:16:25 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.654 15:16:25 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.654 15:16:25 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:07.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.654 --rc genhtml_branch_coverage=1 00:04:07.654 --rc genhtml_function_coverage=1 00:04:07.654 --rc genhtml_legend=1 00:04:07.654 --rc geninfo_all_blocks=1 00:04:07.654 --rc geninfo_unexecuted_blocks=1 00:04:07.654 00:04:07.654 ' 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:07.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.654 --rc genhtml_branch_coverage=1 00:04:07.654 --rc genhtml_function_coverage=1 00:04:07.654 --rc genhtml_legend=1 00:04:07.654 --rc geninfo_all_blocks=1 00:04:07.654 --rc 
geninfo_unexecuted_blocks=1 00:04:07.654 00:04:07.654 ' 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:07.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.654 --rc genhtml_branch_coverage=1 00:04:07.654 --rc genhtml_function_coverage=1 00:04:07.654 --rc genhtml_legend=1 00:04:07.654 --rc geninfo_all_blocks=1 00:04:07.654 --rc geninfo_unexecuted_blocks=1 00:04:07.654 00:04:07.654 ' 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:07.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.654 --rc genhtml_branch_coverage=1 00:04:07.654 --rc genhtml_function_coverage=1 00:04:07.654 --rc genhtml_legend=1 00:04:07.654 --rc geninfo_all_blocks=1 00:04:07.654 --rc geninfo_unexecuted_blocks=1 00:04:07.654 00:04:07.654 ' 00:04:07.654 15:16:25 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:07.654 15:16:25 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:07.654 15:16:25 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:07.654 15:16:25 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:07.654 15:16:25 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:07.654 15:16:25 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:07.654 15:16:25 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:07.654 15:16:25 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3538790 00:04:07.654 15:16:25 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3538790 00:04:07.654 15:16:25 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3538790 ']' 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:07.654 15:16:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:07.654 [2024-11-06 15:16:25.576843] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:04:07.654 [2024-11-06 15:16:25.576921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3538790 ]
00:04:07.914 [2024-11-06 15:16:25.663558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:07.914 [2024-11-06 15:16:25.699135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:07.914 [2024-11-06 15:16:25.699137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:08.484 15:16:26 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:04:08.484 15:16:26 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0
00:04:08.484 15:16:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3538882
00:04:08.484 15:16:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:04:08.484 15:16:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:04:08.745 [
00:04:08.745 "bdev_malloc_delete",
00:04:08.745 "bdev_malloc_create",
00:04:08.745 "bdev_null_resize",
00:04:08.745 "bdev_null_delete",
00:04:08.745 "bdev_null_create",
00:04:08.745 "bdev_nvme_cuse_unregister",
00:04:08.745 "bdev_nvme_cuse_register",
00:04:08.745 "bdev_opal_new_user",
00:04:08.745 "bdev_opal_set_lock_state",
00:04:08.745 "bdev_opal_delete",
00:04:08.745 "bdev_opal_get_info",
00:04:08.745 "bdev_opal_create",
00:04:08.745 "bdev_nvme_opal_revert",
00:04:08.745 "bdev_nvme_opal_init",
00:04:08.745 "bdev_nvme_send_cmd",
00:04:08.745 "bdev_nvme_set_keys",
00:04:08.745 "bdev_nvme_get_path_iostat",
00:04:08.745 "bdev_nvme_get_mdns_discovery_info",
00:04:08.745 "bdev_nvme_stop_mdns_discovery",
00:04:08.745 "bdev_nvme_start_mdns_discovery",
00:04:08.745 "bdev_nvme_set_multipath_policy",
00:04:08.745 "bdev_nvme_set_preferred_path",
00:04:08.745 "bdev_nvme_get_io_paths",
00:04:08.745 "bdev_nvme_remove_error_injection",
00:04:08.745 "bdev_nvme_add_error_injection",
00:04:08.745 "bdev_nvme_get_discovery_info",
00:04:08.745 "bdev_nvme_stop_discovery",
00:04:08.745 "bdev_nvme_start_discovery",
00:04:08.745 "bdev_nvme_get_controller_health_info",
00:04:08.745 "bdev_nvme_disable_controller",
00:04:08.745 "bdev_nvme_enable_controller",
00:04:08.745 "bdev_nvme_reset_controller",
00:04:08.745 "bdev_nvme_get_transport_statistics",
00:04:08.745 "bdev_nvme_apply_firmware",
00:04:08.745 "bdev_nvme_detach_controller",
00:04:08.745 "bdev_nvme_get_controllers",
00:04:08.745 "bdev_nvme_attach_controller",
00:04:08.745 "bdev_nvme_set_hotplug",
00:04:08.745 "bdev_nvme_set_options",
00:04:08.745 "bdev_passthru_delete",
00:04:08.745 "bdev_passthru_create",
00:04:08.745 "bdev_lvol_set_parent_bdev",
00:04:08.745 "bdev_lvol_set_parent",
00:04:08.745 "bdev_lvol_check_shallow_copy",
00:04:08.745 "bdev_lvol_start_shallow_copy",
00:04:08.745 "bdev_lvol_grow_lvstore",
00:04:08.745 "bdev_lvol_get_lvols",
00:04:08.745 "bdev_lvol_get_lvstores",
00:04:08.745 "bdev_lvol_delete",
00:04:08.745 "bdev_lvol_set_read_only",
00:04:08.745 "bdev_lvol_resize",
00:04:08.745 "bdev_lvol_decouple_parent",
00:04:08.745 "bdev_lvol_inflate",
00:04:08.745 "bdev_lvol_rename",
00:04:08.745 "bdev_lvol_clone_bdev",
00:04:08.745 "bdev_lvol_clone",
00:04:08.745 "bdev_lvol_snapshot",
00:04:08.745 "bdev_lvol_create",
00:04:08.745 "bdev_lvol_delete_lvstore",
00:04:08.745 "bdev_lvol_rename_lvstore",
00:04:08.745 "bdev_lvol_create_lvstore",
00:04:08.745 "bdev_raid_set_options",
00:04:08.745 "bdev_raid_remove_base_bdev",
00:04:08.745 "bdev_raid_add_base_bdev",
00:04:08.745 "bdev_raid_delete",
00:04:08.745 "bdev_raid_create",
00:04:08.745 "bdev_raid_get_bdevs",
00:04:08.745 "bdev_error_inject_error",
00:04:08.745 "bdev_error_delete",
00:04:08.745 "bdev_error_create",
00:04:08.745 "bdev_split_delete",
00:04:08.745 "bdev_split_create",
00:04:08.745 "bdev_delay_delete",
00:04:08.745 "bdev_delay_create",
00:04:08.745 "bdev_delay_update_latency",
00:04:08.745 "bdev_zone_block_delete",
00:04:08.745 "bdev_zone_block_create",
00:04:08.745 "blobfs_create",
00:04:08.745 "blobfs_detect",
00:04:08.745 "blobfs_set_cache_size",
00:04:08.745 "bdev_aio_delete",
00:04:08.745 "bdev_aio_rescan",
00:04:08.745 "bdev_aio_create",
00:04:08.745 "bdev_ftl_set_property",
00:04:08.745 "bdev_ftl_get_properties",
00:04:08.745 "bdev_ftl_get_stats",
00:04:08.745 "bdev_ftl_unmap",
00:04:08.745 "bdev_ftl_unload",
00:04:08.745 "bdev_ftl_delete",
00:04:08.745 "bdev_ftl_load",
00:04:08.745 "bdev_ftl_create",
00:04:08.745 "bdev_virtio_attach_controller",
00:04:08.745 "bdev_virtio_scsi_get_devices",
00:04:08.745 "bdev_virtio_detach_controller",
00:04:08.745 "bdev_virtio_blk_set_hotplug",
00:04:08.745 "bdev_iscsi_delete",
00:04:08.745 "bdev_iscsi_create",
00:04:08.745 "bdev_iscsi_set_options",
00:04:08.745 "accel_error_inject_error",
00:04:08.745 "ioat_scan_accel_module",
00:04:08.745 "dsa_scan_accel_module",
00:04:08.745 "iaa_scan_accel_module",
00:04:08.745 "vfu_virtio_create_fs_endpoint",
00:04:08.745 "vfu_virtio_create_scsi_endpoint",
00:04:08.745 "vfu_virtio_scsi_remove_target",
00:04:08.745 "vfu_virtio_scsi_add_target",
00:04:08.745 "vfu_virtio_create_blk_endpoint",
00:04:08.745 "vfu_virtio_delete_endpoint",
00:04:08.745 "keyring_file_remove_key",
00:04:08.745 "keyring_file_add_key",
00:04:08.745 "keyring_linux_set_options",
00:04:08.745 "fsdev_aio_delete",
00:04:08.745 "fsdev_aio_create",
00:04:08.745 "iscsi_get_histogram",
00:04:08.745 "iscsi_enable_histogram",
00:04:08.745 "iscsi_set_options",
00:04:08.745 "iscsi_get_auth_groups",
00:04:08.745 "iscsi_auth_group_remove_secret",
00:04:08.745 "iscsi_auth_group_add_secret",
00:04:08.745 "iscsi_delete_auth_group",
00:04:08.745 "iscsi_create_auth_group",
00:04:08.745 "iscsi_set_discovery_auth",
00:04:08.745 "iscsi_get_options",
00:04:08.745 "iscsi_target_node_request_logout",
00:04:08.745 "iscsi_target_node_set_redirect",
00:04:08.745 "iscsi_target_node_set_auth",
00:04:08.745 "iscsi_target_node_add_lun",
00:04:08.745 "iscsi_get_stats",
00:04:08.745 "iscsi_get_connections",
00:04:08.745 "iscsi_portal_group_set_auth",
00:04:08.745 "iscsi_start_portal_group",
00:04:08.745 "iscsi_delete_portal_group",
00:04:08.745 "iscsi_create_portal_group",
00:04:08.745 "iscsi_get_portal_groups",
00:04:08.745 "iscsi_delete_target_node",
00:04:08.745 "iscsi_target_node_remove_pg_ig_maps",
00:04:08.745 "iscsi_target_node_add_pg_ig_maps",
00:04:08.745 "iscsi_create_target_node",
00:04:08.745 "iscsi_get_target_nodes",
00:04:08.745 "iscsi_delete_initiator_group",
00:04:08.745 "iscsi_initiator_group_remove_initiators",
00:04:08.745 "iscsi_initiator_group_add_initiators",
00:04:08.745 "iscsi_create_initiator_group",
00:04:08.745 "iscsi_get_initiator_groups",
00:04:08.745 "nvmf_set_crdt",
00:04:08.745 "nvmf_set_config",
00:04:08.745 "nvmf_set_max_subsystems",
00:04:08.745 "nvmf_stop_mdns_prr",
00:04:08.745 "nvmf_publish_mdns_prr",
00:04:08.745 "nvmf_subsystem_get_listeners",
00:04:08.745 "nvmf_subsystem_get_qpairs",
00:04:08.745 "nvmf_subsystem_get_controllers",
00:04:08.745 "nvmf_get_stats",
00:04:08.745 "nvmf_get_transports",
00:04:08.745 "nvmf_create_transport",
00:04:08.745 "nvmf_get_targets",
00:04:08.745 "nvmf_delete_target",
00:04:08.745 "nvmf_create_target",
00:04:08.745 "nvmf_subsystem_allow_any_host",
00:04:08.745 "nvmf_subsystem_set_keys",
00:04:08.745 "nvmf_subsystem_remove_host",
00:04:08.745 "nvmf_subsystem_add_host",
00:04:08.745 "nvmf_ns_remove_host",
00:04:08.745 "nvmf_ns_add_host",
00:04:08.745 "nvmf_subsystem_remove_ns",
00:04:08.745 "nvmf_subsystem_set_ns_ana_group",
00:04:08.745 "nvmf_subsystem_add_ns",
00:04:08.745 "nvmf_subsystem_listener_set_ana_state",
00:04:08.745 "nvmf_discovery_get_referrals",
00:04:08.745 "nvmf_discovery_remove_referral",
00:04:08.745 "nvmf_discovery_add_referral",
00:04:08.745 "nvmf_subsystem_remove_listener",
00:04:08.745 "nvmf_subsystem_add_listener",
00:04:08.745 "nvmf_delete_subsystem",
00:04:08.745 "nvmf_create_subsystem",
00:04:08.745 "nvmf_get_subsystems",
00:04:08.745 "env_dpdk_get_mem_stats",
00:04:08.745 "nbd_get_disks",
00:04:08.745 "nbd_stop_disk",
00:04:08.745 "nbd_start_disk",
00:04:08.745 "ublk_recover_disk",
00:04:08.745 "ublk_get_disks",
00:04:08.746 "ublk_stop_disk",
00:04:08.746 "ublk_start_disk",
00:04:08.746 "ublk_destroy_target",
00:04:08.746 "ublk_create_target",
00:04:08.746 "virtio_blk_create_transport",
00:04:08.746 "virtio_blk_get_transports",
00:04:08.746 "vhost_controller_set_coalescing",
00:04:08.746 "vhost_get_controllers",
00:04:08.746 "vhost_delete_controller",
00:04:08.746 "vhost_create_blk_controller",
00:04:08.746 "vhost_scsi_controller_remove_target",
00:04:08.746 "vhost_scsi_controller_add_target",
00:04:08.746 "vhost_start_scsi_controller",
00:04:08.746 "vhost_create_scsi_controller",
00:04:08.746 "thread_set_cpumask",
00:04:08.746 "scheduler_set_options",
00:04:08.746 "framework_get_governor",
00:04:08.746 "framework_get_scheduler",
00:04:08.746 "framework_set_scheduler",
00:04:08.746 "framework_get_reactors",
00:04:08.746 "thread_get_io_channels",
00:04:08.746 "thread_get_pollers",
00:04:08.746 "thread_get_stats",
00:04:08.746 "framework_monitor_context_switch",
00:04:08.746 "spdk_kill_instance",
00:04:08.746 "log_enable_timestamps",
00:04:08.746 "log_get_flags",
00:04:08.746 "log_clear_flag",
00:04:08.746 "log_set_flag",
00:04:08.746 "log_get_level",
00:04:08.746 "log_set_level",
00:04:08.746 "log_get_print_level",
00:04:08.746 "log_set_print_level",
00:04:08.746 "framework_enable_cpumask_locks",
00:04:08.746 "framework_disable_cpumask_locks",
00:04:08.746 "framework_wait_init",
00:04:08.746 "framework_start_init",
00:04:08.746 "scsi_get_devices",
00:04:08.746 "bdev_get_histogram",
00:04:08.746 "bdev_enable_histogram",
00:04:08.746 "bdev_set_qos_limit",
00:04:08.746 "bdev_set_qd_sampling_period",
00:04:08.746 "bdev_get_bdevs",
00:04:08.746 "bdev_reset_iostat",
00:04:08.746 "bdev_get_iostat",
00:04:08.746 "bdev_examine",
00:04:08.746 "bdev_wait_for_examine",
00:04:08.746 "bdev_set_options",
00:04:08.746 "accel_get_stats",
00:04:08.746 "accel_set_options",
00:04:08.746 "accel_set_driver",
00:04:08.746 "accel_crypto_key_destroy",
00:04:08.746 "accel_crypto_keys_get",
00:04:08.746 "accel_crypto_key_create",
00:04:08.746 "accel_assign_opc",
00:04:08.746 "accel_get_module_info",
00:04:08.746 "accel_get_opc_assignments",
00:04:08.746 "vmd_rescan",
00:04:08.746 "vmd_remove_device",
00:04:08.746 "vmd_enable",
00:04:08.746 "sock_get_default_impl",
00:04:08.746 "sock_set_default_impl",
00:04:08.746 "sock_impl_set_options",
00:04:08.746 "sock_impl_get_options",
00:04:08.746 "iobuf_get_stats",
00:04:08.746 "iobuf_set_options",
00:04:08.746 "keyring_get_keys",
00:04:08.746 "vfu_tgt_set_base_path",
00:04:08.746 "framework_get_pci_devices",
00:04:08.746 "framework_get_config",
00:04:08.746 "framework_get_subsystems",
00:04:08.746 "fsdev_set_opts",
00:04:08.746 "fsdev_get_opts",
00:04:08.746 "trace_get_info",
00:04:08.746 "trace_get_tpoint_group_mask",
00:04:08.746 "trace_disable_tpoint_group",
00:04:08.746 "trace_enable_tpoint_group",
00:04:08.746 "trace_clear_tpoint_mask",
00:04:08.746 "trace_set_tpoint_mask",
00:04:08.746 "notify_get_notifications",
00:04:08.746 "notify_get_types",
00:04:08.746 "spdk_get_version",
00:04:08.746 "rpc_get_methods"
00:04:08.746 ]
00:04:08.746 15:16:26 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:04:08.746 15:16:26 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:04:08.746 15:16:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:08.746 15:16:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:04:08.746 15:16:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3538790
00:04:08.746 15:16:26 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3538790 ']'
00:04:08.746 15:16:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3538790
00:04:08.746 15:16:26 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname
00:04:08.746 15:16:26 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:04:08.746 15:16:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3538790
00:04:08.746 15:16:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:04:08.746 15:16:26 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:04:08.746 15:16:26 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3538790'
00:04:08.746 killing process with pid 3538790
00:04:08.746 15:16:26 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3538790
00:04:08.746 15:16:26 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3538790
00:04:09.007
00:04:09.007 real 0m1.520s
00:04:09.007 user 0m2.753s
00:04:09.007 sys 0m0.468s
00:04:09.007 15:16:26 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:09.007 15:16:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:09.007 ************************************
00:04:09.007 END TEST spdkcli_tcp
00:04:09.007 ************************************
00:04:09.007 15:16:26 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:09.007 15:16:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:09.007 15:16:26 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:09.007 15:16:26 -- common/autotest_common.sh@10 -- # set +x
00:04:09.007 ************************************
00:04:09.007 START TEST dpdk_mem_utility
00:04:09.007 ************************************
00:04:09.007 15:16:26 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:09.268 * Looking for test storage...
00:04:09.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:09.268 15:16:27 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:09.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.268 --rc genhtml_branch_coverage=1
00:04:09.268 --rc genhtml_function_coverage=1
00:04:09.268 --rc genhtml_legend=1
00:04:09.268 --rc geninfo_all_blocks=1
00:04:09.268 --rc geninfo_unexecuted_blocks=1
00:04:09.268
00:04:09.268 '
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:09.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.268 --rc genhtml_branch_coverage=1
00:04:09.268 --rc genhtml_function_coverage=1
00:04:09.268 --rc genhtml_legend=1
00:04:09.268 --rc geninfo_all_blocks=1
00:04:09.268 --rc geninfo_unexecuted_blocks=1
00:04:09.268
00:04:09.268 '
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:09.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.268 --rc genhtml_branch_coverage=1
00:04:09.268 --rc genhtml_function_coverage=1
00:04:09.268 --rc genhtml_legend=1
00:04:09.268 --rc geninfo_all_blocks=1
00:04:09.268 --rc geninfo_unexecuted_blocks=1
00:04:09.268
00:04:09.268 '
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:09.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.268 --rc genhtml_branch_coverage=1
00:04:09.268 --rc genhtml_function_coverage=1
00:04:09.268 --rc genhtml_legend=1
00:04:09.268 --rc geninfo_all_blocks=1
00:04:09.268 --rc geninfo_unexecuted_blocks=1
00:04:09.268
00:04:09.268 '
00:04:09.268 15:16:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:09.268 15:16:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3539135
00:04:09.268 15:16:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3539135
00:04:09.268 15:16:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3539135 ']'
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:09.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable
00:04:09.268 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:09.268 [2024-11-06 15:16:27.162175] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:04:09.268 [2024-11-06 15:16:27.162248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539135 ]
00:04:09.528 [2024-11-06 15:16:27.251309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:09.528 [2024-11-06 15:16:27.291976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:10.099 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:04:10.099 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0
00:04:10.099 15:16:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:10.099 15:16:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:10.099 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:10.099 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:10.099 {
00:04:10.099 "filename": "/tmp/spdk_mem_dump.txt"
00:04:10.099 }
00:04:10.099 15:16:27 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:10.099 15:16:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:10.099 DPDK memory size 818.000000 MiB in 1 heap(s)
00:04:10.099 1 heaps totaling size 818.000000 MiB
00:04:10.099 size: 818.000000 MiB heap id: 0
00:04:10.099 end heaps----------
00:04:10.099 9 mempools totaling size 603.782043 MiB
00:04:10.099 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:10.099 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:10.099 size: 100.555481 MiB name: bdev_io_3539135
00:04:10.099 size: 50.003479 MiB name: msgpool_3539135
00:04:10.099 size: 36.509338 MiB name: fsdev_io_3539135
00:04:10.099 size: 21.763794 MiB name: PDU_Pool
00:04:10.099 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:10.099 size: 4.133484 MiB name: evtpool_3539135
00:04:10.099 size: 0.026123 MiB name: Session_Pool
00:04:10.099 end mempools-------
00:04:10.099 6 memzones totaling size 4.142822 MiB
00:04:10.099 size: 1.000366 MiB name: RG_ring_0_3539135
00:04:10.099 size: 1.000366 MiB name: RG_ring_1_3539135
00:04:10.099 size: 1.000366 MiB name: RG_ring_4_3539135
00:04:10.099 size: 1.000366 MiB name: RG_ring_5_3539135
00:04:10.099 size: 0.125366 MiB name: RG_ring_2_3539135
00:04:10.099 size: 0.015991 MiB name: RG_ring_3_3539135
00:04:10.099 end memzones-------
00:04:10.099 15:16:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:04:10.099 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15
00:04:10.099 list of free elements. size: 10.852478 MiB
00:04:10.099 element at address: 0x200019200000 with size: 0.999878 MiB
00:04:10.099 element at address: 0x200019400000 with size: 0.999878 MiB
00:04:10.099 element at address: 0x200000400000 with size: 0.998535 MiB
00:04:10.099 element at address: 0x200032000000 with size: 0.994446 MiB
00:04:10.099 element at address: 0x200006400000 with size: 0.959839 MiB
00:04:10.099 element at address: 0x200012c00000 with size: 0.944275 MiB
00:04:10.099 element at address: 0x200019600000 with size: 0.936584 MiB
00:04:10.099 element at address: 0x200000200000 with size: 0.717346 MiB
00:04:10.099 element at address: 0x20001ae00000 with size: 0.582886 MiB
00:04:10.099 element at address: 0x200000c00000 with size: 0.495422 MiB
00:04:10.099 element at address: 0x20000a600000 with size: 0.490723 MiB
00:04:10.099 element at address: 0x200019800000 with size: 0.485657 MiB
00:04:10.099 element at address: 0x200003e00000 with size: 0.481934 MiB
00:04:10.099 element at address: 0x200028200000 with size: 0.410034 MiB
00:04:10.099 element at address: 0x200000800000 with size: 0.355042 MiB
00:04:10.099 list of standard malloc elements. size: 199.218628 MiB
00:04:10.099 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:04:10.099 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:04:10.099 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:04:10.099 element at address: 0x2000194fff80 with size: 1.000122 MiB
00:04:10.099 element at address: 0x2000196fff80 with size: 1.000122 MiB
00:04:10.099 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:04:10.099 element at address: 0x2000196eff00 with size: 0.062622 MiB
00:04:10.099 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:04:10.099 element at address: 0x2000196efdc0 with size: 0.000305 MiB
00:04:10.099 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:04:10.099 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:04:10.099 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:04:10.099 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:04:10.099 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:04:10.099 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:04:10.099 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20000085b040 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20000085f300 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20000087f680 with size: 0.000183 MiB
00:04:10.099 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:04:10.099 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:04:10.099 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:04:10.099 element at address: 0x200000cff000 with size: 0.000183 MiB
00:04:10.099 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:04:10.099 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:04:10.099 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:04:10.099 element at address: 0x200003efb980 with size: 0.000183 MiB
00:04:10.099 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:04:10.099 element at address: 0x200012cf1bc0 with size: 0.000183 MiB
00:04:10.099 element at address: 0x2000196efc40 with size: 0.000183 MiB
00:04:10.099 element at address: 0x2000196efd00 with size: 0.000183 MiB
00:04:10.099 element at address: 0x2000198bc740 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20001ae95380 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20001ae95440 with size: 0.000183 MiB
00:04:10.099 element at address: 0x200028268f80 with size: 0.000183 MiB
00:04:10.099 element at address: 0x200028269040 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20002826fc40 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20002826fe40 with size: 0.000183 MiB
00:04:10.099 element at address: 0x20002826ff00 with size: 0.000183 MiB
00:04:10.099 list of memzone associated elements. size: 607.928894 MiB
00:04:10.099 element at address: 0x20001ae95500 with size: 211.416748 MiB
00:04:10.099 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:10.100 element at address: 0x20002826ffc0 with size: 157.562561 MiB
00:04:10.100 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:10.100 element at address: 0x200012df1e80 with size: 100.055054 MiB
00:04:10.100 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3539135_0
00:04:10.100 element at address: 0x200000dff380 with size: 48.003052 MiB
00:04:10.100 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3539135_0
00:04:10.100 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:04:10.100 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3539135_0
00:04:10.100 element at address: 0x2000199be940 with size: 20.255554 MiB
00:04:10.100 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:10.100 element at address: 0x2000321feb40 with size: 18.005066 MiB
00:04:10.100 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:10.100 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:04:10.100 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3539135_0
00:04:10.100 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:04:10.100 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3539135
00:04:10.100 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:10.100 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3539135
00:04:10.100 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:04:10.100 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:10.100 element at address: 0x2000198bc800 with size: 1.008118 MiB
00:04:10.100 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:10.100 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:04:10.100 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:10.100 element at address: 0x200003efba40 with size: 1.008118 MiB
00:04:10.100 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:10.100 element at address: 0x200000cff180 with size: 1.000488 MiB
00:04:10.100 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3539135
00:04:10.100 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:04:10.100 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3539135
00:04:10.100 element at address: 0x200012cf1c80 with size: 1.000488 MiB
00:04:10.100 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3539135
00:04:10.100 element at address: 0x2000320fe940 with size: 1.000488 MiB
00:04:10.100 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3539135
00:04:10.100 element at address: 0x20000087f740 with size: 0.500488 MiB
00:04:10.100 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3539135
00:04:10.100 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:04:10.100 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3539135
00:04:10.100 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:04:10.100 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:10.100 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:04:10.100 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:10.100 element at address: 0x20001987c540 with size: 0.250488 MiB
00:04:10.100 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:10.100 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:04:10.100 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3539135
00:04:10.100 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:04:10.100 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3539135
00:04:10.100 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:04:10.100 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:10.100 element at address: 0x200028269100 with size: 0.023743 MiB
00:04:10.100 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:10.100 element at address: 0x20000085b100 with size: 0.016113 MiB
00:04:10.100 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3539135
00:04:10.100 element at address: 0x20002826f240 with size: 0.002441 MiB
00:04:10.100 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:10.100 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:04:10.100 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3539135
00:04:10.100 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:04:10.100 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3539135
00:04:10.100 element at address: 0x20000085af00 with size: 0.000305 MiB
00:04:10.100 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3539135
00:04:10.100 element at address: 0x20002826fd00 with size: 0.000305 MiB
00:04:10.100 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:10.100 15:16:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:10.100 15:16:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3539135
00:04:10.100 15:16:28 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3539135 ']'
00:04:10.100 15:16:28 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3539135
00:04:10.100 15:16:28 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:04:10.100 15:16:28 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:04:10.360 15:16:28 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3539135
00:04:10.360 15:16:28 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:04:10.360 15:16:28 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:04:10.360 15:16:28 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3539135'
00:04:10.360 killing process with pid 3539135
00:04:10.360 15:16:28 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3539135
00:04:10.360 15:16:28 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3539135
00:04:10.360
00:04:10.360 real 0m1.420s
00:04:10.360 user 0m1.490s
00:04:10.360 sys 0m0.437s
00:04:10.360 15:16:28 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:10.360 15:16:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:10.360 ************************************
00:04:10.360 END TEST dpdk_mem_utility
00:04:10.360 ************************************
00:04:10.627 15:16:28 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:10.627 15:16:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:10.627 15:16:28 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:10.627 15:16:28 -- common/autotest_common.sh@10 -- # set +x
00:04:10.627 ************************************
00:04:10.627 START TEST event
00:04:10.627 ************************************
00:04:10.627 15:16:28 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:10.627 * Looking for test storage...
00:04:10.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:10.627 15:16:28 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:10.627 15:16:28 event -- common/autotest_common.sh@1691 -- # lcov --version
00:04:10.627 15:16:28 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:10.627 15:16:28 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:10.627 15:16:28 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:10.627 15:16:28 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:10.627 15:16:28 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:10.627 15:16:28 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:10.627 15:16:28 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:10.627 15:16:28 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:10.627 15:16:28 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:10.627 15:16:28 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:10.627 15:16:28 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:10.627 15:16:28 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:10.627 15:16:28 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:10.627 15:16:28 event -- scripts/common.sh@344 -- # case "$op" in
00:04:10.627 15:16:28 event -- scripts/common.sh@345 -- # : 1
00:04:10.627 15:16:28 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:10.627 15:16:28 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:10.627 15:16:28 event -- scripts/common.sh@365 -- # decimal 1
00:04:10.627 15:16:28 event -- scripts/common.sh@353 -- # local d=1
00:04:10.627 15:16:28 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:10.627 15:16:28 event -- scripts/common.sh@355 -- # echo 1
00:04:10.627 15:16:28 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:10.627 15:16:28 event -- scripts/common.sh@366 -- # decimal 2
00:04:10.627 15:16:28 event -- scripts/common.sh@353 -- # local d=2
00:04:10.627 15:16:28 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:10.627 15:16:28 event -- scripts/common.sh@355 -- # echo 2
00:04:10.627 15:16:28 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:10.627 15:16:28 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:10.628 15:16:28 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:10.628 15:16:28 event -- scripts/common.sh@368 -- # return 0
00:04:10.628 15:16:28 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:10.628 15:16:28 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:10.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:10.628 --rc genhtml_branch_coverage=1
00:04:10.628 --rc genhtml_function_coverage=1
00:04:10.628 --rc genhtml_legend=1
00:04:10.628 --rc geninfo_all_blocks=1
00:04:10.628 --rc geninfo_unexecuted_blocks=1
00:04:10.628
00:04:10.628 '
00:04:10.628 15:16:28 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:10.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:10.628 --rc genhtml_branch_coverage=1
00:04:10.628 --rc genhtml_function_coverage=1
00:04:10.628 --rc genhtml_legend=1
00:04:10.628 --rc geninfo_all_blocks=1
00:04:10.628 --rc geninfo_unexecuted_blocks=1
00:04:10.628
00:04:10.628 '
00:04:10.628 15:16:28 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:10.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:10.628 --rc genhtml_branch_coverage=1
00:04:10.628 --rc genhtml_function_coverage=1
00:04:10.628 --rc genhtml_legend=1
00:04:10.628 --rc geninfo_all_blocks=1
00:04:10.628 --rc geninfo_unexecuted_blocks=1
00:04:10.628
00:04:10.628 '
00:04:10.628 15:16:28 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:10.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:10.628 --rc genhtml_branch_coverage=1
00:04:10.628 --rc genhtml_function_coverage=1
00:04:10.628 --rc genhtml_legend=1
00:04:10.628 --rc geninfo_all_blocks=1
00:04:10.628 --rc geninfo_unexecuted_blocks=1
00:04:10.628
00:04:10.628 '
00:04:10.628 15:16:28 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:10.628 15:16:28 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:10.628 15:16:28 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:10.628 15:16:28 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:04:10.628 15:16:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:10.628 15:16:28 event -- common/autotest_common.sh@10 -- # set +x
00:04:10.897 ************************************
00:04:10.897 START TEST event_perf
00:04:10.897 ************************************
00:04:10.897 15:16:28 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:10.897 Running I/O for 1 seconds...[2024-11-06 15:16:28.669767] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:04:10.897 [2024-11-06 15:16:28.669871] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539434 ]
00:04:10.897 [2024-11-06 15:16:28.760907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:10.897 [2024-11-06 15:16:28.804387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:10.897 [2024-11-06 15:16:28.804542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:10.897 [2024-11-06 15:16:28.805022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:10.897 [2024-11-06 15:16:28.805095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:11.941 Running I/O for 1 seconds...
00:04:11.941 lcore 0: 179604
00:04:11.941 lcore 1: 179606
00:04:11.941 lcore 2: 179609
00:04:11.941 lcore 3: 179608
00:04:11.941 done.
00:04:11.941
00:04:11.941 real 0m1.186s
00:04:11.941 user 0m4.102s
00:04:11.941 sys 0m0.081s
00:04:11.941 15:16:29 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:11.941 15:16:29 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:11.941 ************************************
00:04:11.941 END TEST event_perf
00:04:11.941 ************************************
00:04:11.941 15:16:29 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:11.941 15:16:29 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:04:11.941 15:16:29 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:11.941 15:16:29 event -- common/autotest_common.sh@10 -- # set +x
00:04:11.941 ************************************
00:04:11.941 START TEST event_reactor
00:04:11.941 ************************************
00:04:11.941 15:16:29 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:12.233 [2024-11-06 15:16:29.925994] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:04:12.233 [2024-11-06 15:16:29.926087] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539725 ]
00:04:12.233 [2024-11-06 15:16:30.015016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:12.233 [2024-11-06 15:16:30.048022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:13.175 test_start
00:04:13.175 oneshot
00:04:13.175 tick 100
00:04:13.175 tick 100
00:04:13.175 tick 250
00:04:13.175 tick 100
00:04:13.175 tick 100
00:04:13.175 tick 100
00:04:13.175 tick 250
00:04:13.175 tick 500
00:04:13.175 tick 100
00:04:13.175 tick 100
00:04:13.175 tick 250
00:04:13.175 tick 100
00:04:13.175 tick 100
00:04:13.175 test_end
00:04:13.175
00:04:13.175 real 0m1.171s
00:04:13.175 user 0m1.085s
00:04:13.175 sys 0m0.082s
00:04:13.175 15:16:31 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:13.175 15:16:31 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:13.175 ************************************
00:04:13.175 END TEST event_reactor
00:04:13.175 ************************************
00:04:13.175 15:16:31 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:13.175 15:16:31 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:04:13.175 15:16:31 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:13.175 15:16:31 event -- common/autotest_common.sh@10 -- # set +x
00:04:13.175 ************************************
00:04:13.175 START TEST event_reactor_perf
00:04:13.175 ************************************
00:04:13.175 15:16:31 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:13.435 [2024-11-06 15:16:31.174113] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:04:13.435 [2024-11-06 15:16:31.174217] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3540082 ]
00:04:13.435 [2024-11-06 15:16:31.261227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:13.435 [2024-11-06 15:16:31.299203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:14.375 test_start
00:04:14.375 test_end
00:04:14.375 Performance: 538651 events per second
00:04:14.375
00:04:14.375 real 0m1.172s
00:04:14.375 user 0m1.092s
00:04:14.375 sys 0m0.076s
00:04:14.375 15:16:32 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:14.375 15:16:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:14.375 ************************************
00:04:14.375 END TEST event_reactor_perf
00:04:14.375 ************************************
00:04:14.637 15:16:32 event -- event/event.sh@49 -- # uname -s
00:04:14.637 15:16:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:14.637 15:16:32 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:14.637 15:16:32 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:14.637 15:16:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:14.637 15:16:32 event -- common/autotest_common.sh@10 -- # set +x
00:04:14.637 ************************************
00:04:14.637 START TEST event_scheduler
00:04:14.637 ************************************
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:14.637 * Looking for test storage...
00:04:14.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:14.637 15:16:32 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:14.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.637 --rc genhtml_branch_coverage=1
00:04:14.637 --rc genhtml_function_coverage=1
00:04:14.637 --rc genhtml_legend=1
00:04:14.637 --rc geninfo_all_blocks=1
00:04:14.637 --rc geninfo_unexecuted_blocks=1
00:04:14.637
00:04:14.637 '
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:14.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.637 --rc genhtml_branch_coverage=1
00:04:14.637 --rc genhtml_function_coverage=1
00:04:14.637 --rc genhtml_legend=1
00:04:14.637 --rc geninfo_all_blocks=1
00:04:14.637 --rc geninfo_unexecuted_blocks=1
00:04:14.637
00:04:14.637 '
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:14.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.637 --rc genhtml_branch_coverage=1
00:04:14.637 --rc genhtml_function_coverage=1
00:04:14.637 --rc genhtml_legend=1
00:04:14.637 --rc geninfo_all_blocks=1
00:04:14.637 --rc geninfo_unexecuted_blocks=1
00:04:14.637
00:04:14.637 '
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:14.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.637 --rc genhtml_branch_coverage=1
00:04:14.637 --rc genhtml_function_coverage=1
00:04:14.637 --rc genhtml_legend=1
00:04:14.637 --rc geninfo_all_blocks=1
00:04:14.637 --rc geninfo_unexecuted_blocks=1
00:04:14.637
00:04:14.637 '
00:04:14.637 15:16:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:14.637 15:16:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3540466
00:04:14.637 15:16:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:14.637 15:16:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3540466
00:04:14.637 15:16:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3540466 ']'
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:14.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable
00:04:14.637 15:16:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:14.897 [2024-11-06 15:16:32.672658] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:04:14.897 [2024-11-06 15:16:32.672732] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3540466 ]
00:04:14.897 [2024-11-06 15:16:32.764514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:14.897 [2024-11-06 15:16:32.820257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:14.897 [2024-11-06 15:16:32.820420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:14.897 [2024-11-06 15:16:32.820578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:14.897 [2024-11-06 15:16:32.820578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:15.839 15:16:33 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:04:15.839 15:16:33 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0
00:04:15.839 15:16:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:15.839 15:16:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:15.839 15:16:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:15.839 [2024-11-06 15:16:33.486931] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:04:15.839 [2024-11-06 15:16:33.486949] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:04:15.839 [2024-11-06 15:16:33.486959] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:15.839 [2024-11-06 15:16:33.486965] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:15.839 [2024-11-06 15:16:33.486971] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:15.839 15:16:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:15.839 15:16:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:15.839 15:16:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:15.839 15:16:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:15.839 [2024-11-06 15:16:33.554906] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
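The scheduler_create_thread test that follows drives the scheduler app entirely through rpc.py plugin calls. As a minimal sketch, assuming a scheduler test app is still listening on the default /var/tmp/spdk.sock and that scheduler_plugin.py from test/event/scheduler is importable (both are assumptions, not captured output), the same RPCs could be issued by hand:

    # Hypothetical manual reproduction of the plugin RPCs traced below.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    export PYTHONPATH=$SPDK_ROOT/test/event/scheduler    # assumed plugin location
    # create a thread pinned to core 0 (cpumask 0x1) that reports 100% activity
    $SPDK_ROOT/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100
    # ask the framework where the dynamic scheduler placed the threads
    $SPDK_ROOT/scripts/rpc.py framework_get_reactors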
00:04:15.839 15:16:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:15.839 15:16:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:15.839 15:16:33 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:15.839 15:16:33 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:15.839 15:16:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:15.839 ************************************
00:04:15.839 START TEST scheduler_create_thread
00:04:15.839 ************************************
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:15.839 2
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:15.839 3
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:15.839 4
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:15.839 5
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:15.839 6
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:15.839 7
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:15.839 8
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:15.839 9
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:15.839 15:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:16.411 10
00:04:16.411 15:16:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:16.411 15:16:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:16.411 15:16:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:16.411 15:16:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:17.792 15:16:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:17.792 15:16:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:17.792 15:16:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:17.792 15:16:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:17.792 15:16:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:18.362 15:16:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:18.362 15:16:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:18.362 15:16:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:18.362 15:16:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:19.302 15:16:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:19.302 15:16:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:19.302 15:16:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:19.302 15:16:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:19.302 15:16:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:19.872 15:16:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:19.872
00:04:19.872 real 0m4.223s
00:04:19.872 user 0m0.026s
00:04:19.872 sys 0m0.006s
00:04:19.872 15:16:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:19.872 15:16:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:20.133 ************************************
00:04:20.133 END TEST scheduler_create_thread
00:04:20.133 ************************************
00:04:20.133 15:16:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:20.133 15:16:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3540466
00:04:20.133 15:16:37 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3540466 ']'
00:04:20.133 15:16:37 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 3540466
00:04:20.133 15:16:37 event.event_scheduler -- common/autotest_common.sh@957 -- # uname
00:04:20.133 15:16:37 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:04:20.133 15:16:37 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3540466
00:04:20.133 15:16:37 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:04:20.133 15:16:37 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:04:20.133 15:16:37 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3540466'
00:04:20.133 killing process with pid 3540466
00:04:20.133 15:16:37 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3540466
00:04:20.133 15:16:37 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3540466
00:04:20.133 [2024-11-06 15:16:38.096545] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
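The teardown traced above is the harness's killprocess() pattern: a signal-0 probe to confirm the pid is alive, a process-name check against sudo, then kill and wait. A condensed sketch of that logic, simplified from the steps visible in the autotest_common.sh trace and not the verbatim function:

    # Condensed sketch; only the code path actually traced above is shown.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1        # the '[' -z ... ']' guard from the trace
        kill -0 "$pid" || return 1       # probe: does the pid still exist?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_2 above
        fi
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                  # reap it so the next test starts clean
        fi
    }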
00:04:20.393 00:04:20.393 real 0m5.844s 00:04:20.393 user 0m12.885s 00:04:20.393 sys 0m0.430s 00:04:20.393 15:16:38 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:20.393 15:16:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.393 ************************************ 00:04:20.393 END TEST event_scheduler 00:04:20.393 ************************************ 00:04:20.393 15:16:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:20.393 15:16:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:20.393 15:16:38 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.393 15:16:38 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.393 15:16:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.393 ************************************ 00:04:20.393 START TEST app_repeat 00:04:20.393 ************************************ 00:04:20.393 15:16:38 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3541541 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3541541' 00:04:20.393 Process app_repeat pid: 3541541 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:20.393 spdk_app_start Round 0 00:04:20.393 15:16:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3541541 /var/tmp/spdk-nbd.sock 00:04:20.393 15:16:38 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3541541 ']' 00:04:20.393 15:16:38 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:20.393 15:16:38 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:20.393 15:16:38 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:20.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:20.394 15:16:38 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:20.394 15:16:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:20.394 [2024-11-06 15:16:38.374521] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
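The app_repeat test starting here launches the SPDK app with a private RPC socket, a two-core mask and a four-second timer per round, exactly as traced: app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4. Reduced to a standalone launch step (the trap mirrors the traced one, with the harness's killprocess helper replaced by a plain kill):

    app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat
    sock=/var/tmp/spdk-nbd.sock

    # Two reactors (mask 0x3), four seconds per round, private RPC socket:
    $app -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!

    # Clean the app up on interrupt or premature exit, as the trace does:
    trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT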
00:04:20.394 [2024-11-06 15:16:38.374588] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3541541 ] 00:04:20.653 [2024-11-06 15:16:38.461703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:20.653 [2024-11-06 15:16:38.492919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.653 [2024-11-06 15:16:38.493024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.653 15:16:38 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:20.653 15:16:38 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:20.653 15:16:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:20.913 Malloc0 00:04:20.913 15:16:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:21.187 Malloc1 00:04:21.187 15:16:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.187 15:16:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:21.187 /dev/nbd0 00:04:21.447 15:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:21.447 15:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:21.447 1+0 records in 00:04:21.447 1+0 records out 00:04:21.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290627 s, 14.1 MB/s 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:21.447 15:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:21.447 15:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.447 15:16:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:21.447 /dev/nbd1 00:04:21.447 15:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:21.447 15:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:21.447 15:16:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:21.448 15:16:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:21.448 15:16:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:21.448 15:16:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:21.448 1+0 records in 00:04:21.448 1+0 records out 00:04:21.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257807 s, 15.9 MB/s 00:04:21.448 15:16:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.707 15:16:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:21.707 15:16:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.707 15:16:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:21.707 15:16:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.707 
15:16:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:21.707 { 00:04:21.707 "nbd_device": "/dev/nbd0", 00:04:21.707 "bdev_name": "Malloc0" 00:04:21.707 }, 00:04:21.707 { 00:04:21.707 "nbd_device": "/dev/nbd1", 00:04:21.707 "bdev_name": "Malloc1" 00:04:21.707 } 00:04:21.707 ]' 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:21.707 { 00:04:21.707 "nbd_device": "/dev/nbd0", 00:04:21.707 "bdev_name": "Malloc0" 00:04:21.707 }, 00:04:21.707 { 00:04:21.707 "nbd_device": "/dev/nbd1", 00:04:21.707 "bdev_name": "Malloc1" 00:04:21.707 } 00:04:21.707 ]' 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:21.707 /dev/nbd1' 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:21.707 /dev/nbd1' 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:21.707 15:16:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:21.967 256+0 records in 00:04:21.967 256+0 records out 00:04:21.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121133 s, 86.6 MB/s 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:21.967 256+0 records in 00:04:21.967 256+0 records out 00:04:21.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118529 s, 88.5 MB/s 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:21.967 256+0 records in 00:04:21.967 256+0 records out 00:04:21.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126882 s, 82.6 MB/s 00:04:21.967 15:16:39 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:21.967 15:16:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:22.226 15:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:22.226 15:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:22.226 15:16:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:22.226 15:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:22.226 15:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:22.226 15:16:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:22.226 15:16:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:22.227 15:16:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:22.227 15:16:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:22.227 15:16:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.227 15:16:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:22.487 15:16:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:22.487 15:16:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:22.487 15:16:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.487 15:16:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:22.487 15:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:22.487 15:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:22.487 15:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:22.487 15:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:22.487 15:16:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:22.487 15:16:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:22.487 15:16:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:22.487 15:16:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:22.487 15:16:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:22.747 15:16:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:22.747 [2024-11-06 15:16:40.638181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:22.747 [2024-11-06 15:16:40.668378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.747 [2024-11-06 15:16:40.668378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.747 [2024-11-06 15:16:40.697509] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:22.747 [2024-11-06 15:16:40.697542] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:26.042 15:16:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:26.042 15:16:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:26.042 spdk_app_start Round 1 00:04:26.042 15:16:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3541541 /var/tmp/spdk-nbd.sock 00:04:26.042 15:16:43 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3541541 ']' 00:04:26.042 15:16:43 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:26.042 15:16:43 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:26.043 15:16:43 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:26.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:26.043 15:16:43 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:26.043 15:16:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:26.043 15:16:43 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:26.043 15:16:43 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:26.043 15:16:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.043 Malloc0 00:04:26.043 15:16:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.304 Malloc1 00:04:26.304 15:16:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.304 15:16:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:26.565 /dev/nbd0 00:04:26.565 15:16:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:26.565 15:16:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:26.565 1+0 records in 00:04:26.565 1+0 records out 00:04:26.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276835 s, 14.8 MB/s 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:26.565 15:16:44 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:26.565 15:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.565 15:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.565 15:16:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:26.565 /dev/nbd1 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.851 1+0 records in 00:04:26.851 1+0 records out 00:04:26.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290971 s, 14.1 MB/s 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:26.851 15:16:44 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:26.851 { 00:04:26.851 "nbd_device": "/dev/nbd0", 00:04:26.851 "bdev_name": "Malloc0" 00:04:26.851 }, 00:04:26.851 { 00:04:26.851 "nbd_device": "/dev/nbd1", 00:04:26.851 "bdev_name": "Malloc1" 00:04:26.851 } 00:04:26.851 ]' 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:26.851 { 00:04:26.851 "nbd_device": "/dev/nbd0", 00:04:26.851 "bdev_name": "Malloc0" 00:04:26.851 }, 00:04:26.851 { 00:04:26.851 "nbd_device": "/dev/nbd1", 00:04:26.851 "bdev_name": "Malloc1" 00:04:26.851 } 00:04:26.851 ]' 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:26.851 /dev/nbd1' 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:26.851 /dev/nbd1' 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:26.851 15:16:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:27.113 256+0 records in 00:04:27.113 256+0 records out 00:04:27.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012387 s, 84.7 MB/s 00:04:27.113 15:16:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:27.114 256+0 records in 00:04:27.114 256+0 records out 00:04:27.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118908 s, 88.2 MB/s 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:27.114 256+0 records in 00:04:27.114 256+0 records out 00:04:27.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128892 s, 81.4 MB/s 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.114 15:16:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:27.114 15:16:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:27.114 15:16:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:27.114 15:16:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:27.114 15:16:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.114 15:16:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.114 15:16:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:27.114 15:16:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:27.114 15:16:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.114 15:16:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.114 15:16:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:27.376 15:16:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:27.376 15:16:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:27.376 15:16:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:27.376 15:16:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.376 15:16:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.376 15:16:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:27.376 15:16:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:27.376 15:16:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.376 15:16:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.376 15:16:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.376 15:16:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.637 15:16:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:27.637 15:16:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:27.637 15:16:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.637 15:16:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:27.637 15:16:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:27.637 15:16:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.637 15:16:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:27.637 15:16:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:27.637 15:16:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:27.637 15:16:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:27.637 15:16:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:27.637 15:16:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:27.637 15:16:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:27.898 15:16:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:27.898 [2024-11-06 15:16:45.784447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.898 [2024-11-06 15:16:45.814256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.898 [2024-11-06 15:16:45.814257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.898 [2024-11-06 15:16:45.844014] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:27.898 [2024-11-06 15:16:45.844047] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:31.198 15:16:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:31.198 15:16:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:31.198 spdk_app_start Round 2 00:04:31.198 15:16:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3541541 /var/tmp/spdk-nbd.sock 00:04:31.198 15:16:48 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3541541 ']' 00:04:31.198 15:16:48 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.198 15:16:48 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:31.198 15:16:48 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:31.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
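The body of every round is the same write/verify cycle: two malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is pushed through each device with O_DIRECT, read back with cmp, and the exports are torn down. The core of that cycle, condensed from the nbd_rpc_data_verify/nbd_dd_data_verify calls in the trace (the temp-file path is shortened here; error handling is left to set -e):

    set -e
    sock=/var/tmp/spdk-nbd.sock
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    tmp_file=/tmp/nbdrandtest   # the trace keeps this under spdk/test/event

    # Export the malloc bdevs as kernel nbd devices:
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    # 256 x 4 KiB = 1 MiB of random data, written through each device with
    # O_DIRECT and then compared byte-for-byte against the source file:
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp_file" "$dev"   # any mismatch fails the round
    done
    rm "$tmp_file"

    # Tear the exports down again:
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1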
00:04:31.198 15:16:48 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:31.198 15:16:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.198 15:16:48 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:31.198 15:16:48 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:31.198 15:16:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.198 Malloc0 00:04:31.198 15:16:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.458 Malloc1 00:04:31.458 15:16:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.458 15:16:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:31.718 /dev/nbd0 00:04:31.718 15:16:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:31.718 15:16:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:31.718 1+0 records in 00:04:31.718 1+0 records out 00:04:31.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273397 s, 15.0 MB/s 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:31.718 15:16:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.718 15:16:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.718 15:16:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:31.718 /dev/nbd1 00:04:31.718 15:16:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:31.718 15:16:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.718 1+0 records in 00:04:31.718 1+0 records out 00:04:31.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289095 s, 14.2 MB/s 00:04:31.718 15:16:49 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.977 15:16:49 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:31.977 15:16:49 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.977 15:16:49 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:31.978 15:16:49 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:31.978 { 00:04:31.978 "nbd_device": "/dev/nbd0", 00:04:31.978 "bdev_name": "Malloc0" 00:04:31.978 }, 00:04:31.978 { 00:04:31.978 "nbd_device": "/dev/nbd1", 00:04:31.978 "bdev_name": "Malloc1" 00:04:31.978 } 00:04:31.978 ]' 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:31.978 { 00:04:31.978 "nbd_device": "/dev/nbd0", 00:04:31.978 "bdev_name": "Malloc0" 00:04:31.978 }, 00:04:31.978 { 00:04:31.978 "nbd_device": "/dev/nbd1", 00:04:31.978 "bdev_name": "Malloc1" 00:04:31.978 } 00:04:31.978 ]' 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.978 /dev/nbd1' 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.978 /dev/nbd1' 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:31.978 256+0 records in 00:04:31.978 256+0 records out 00:04:31.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117431 s, 89.3 MB/s 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.978 15:16:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:32.238 256+0 records in 00:04:32.238 256+0 records out 00:04:32.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124713 s, 84.1 MB/s 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:32.238 256+0 records in 00:04:32.238 256+0 records out 00:04:32.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129434 s, 81.0 MB/s 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.238 15:16:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.238 15:16:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.498 15:16:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.498 15:16:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.498 15:16:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.498 15:16:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.498 15:16:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.498 15:16:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.498 15:16:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.498 15:16:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.498 15:16:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.498 15:16:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.498 15:16:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.759 15:16:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:32.759 15:16:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:32.759 15:16:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.759 15:16:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:32.759 15:16:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:32.759 15:16:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.759 15:16:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:32.759 15:16:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:32.759 15:16:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:32.759 15:16:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:32.759 15:16:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:32.759 15:16:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:32.759 15:16:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:33.019 15:16:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:33.020 [2024-11-06 15:16:50.904657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.020 [2024-11-06 15:16:50.934505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.020 [2024-11-06 15:16:50.934505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.020 [2024-11-06 15:16:50.963779] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:33.020 [2024-11-06 15:16:50.963816] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:36.315 15:16:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3541541 /var/tmp/spdk-nbd.sock 00:04:36.315 15:16:53 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3541541 ']' 00:04:36.315 15:16:53 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.315 15:16:53 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.315 15:16:53 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
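After teardown, each round asserts that no nbd exports remain: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} objects, the device paths are pulled out with jq, and the matches are counted with grep -c. That counting idiom, lifted from the trace into a small helper:

    nbd_get_count() {
        local rpc_server=$1
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        local disks_json disks_name

        disks_json=$("$rpc" -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')

        # grep -c prints 0 but exits non-zero when nothing matches, so guard
        # it the same way the traced pipeline does with its trailing 'true':
        echo "$disks_name" | grep -c /dev/nbd || true
    }

    # Expected: 2 while both exports are up, 0 after nbd_stop_disk.
    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    [ "$count" -eq 0 ] || echo "stale nbd exports: $count" >&2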
00:04:36.315 15:16:53 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.315 15:16:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:36.315 15:16:54 event.app_repeat -- event/event.sh@39 -- # killprocess 3541541 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3541541 ']' 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3541541 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3541541 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3541541' 00:04:36.315 killing process with pid 3541541 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3541541 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3541541 00:04:36.315 spdk_app_start is called in Round 0. 00:04:36.315 Shutdown signal received, stop current app iteration 00:04:36.315 Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 reinitialization... 00:04:36.315 spdk_app_start is called in Round 1. 00:04:36.315 Shutdown signal received, stop current app iteration 00:04:36.315 Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 reinitialization... 00:04:36.315 spdk_app_start is called in Round 2. 00:04:36.315 Shutdown signal received, stop current app iteration 00:04:36.315 Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 reinitialization... 00:04:36.315 spdk_app_start is called in Round 3. 
00:04:36.315 Shutdown signal received, stop current app iteration 00:04:36.315 15:16:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:36.315 15:16:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:36.315 00:04:36.315 real 0m15.831s 00:04:36.315 user 0m34.761s 00:04:36.315 sys 0m2.288s 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:36.315 15:16:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.315 ************************************ 00:04:36.315 END TEST app_repeat 00:04:36.315 ************************************ 00:04:36.315 15:16:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:36.315 15:16:54 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:36.315 15:16:54 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.315 15:16:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.315 15:16:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.315 ************************************ 00:04:36.315 START TEST cpu_locks 00:04:36.315 ************************************ 00:04:36.315 15:16:54 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:36.576 * Looking for test storage... 00:04:36.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:36.576 15:16:54 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:36.576 15:16:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:36.576 15:16:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:36.576 15:16:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.576 15:16:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:36.576 15:16:54 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.576 15:16:54 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:36.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.576 --rc genhtml_branch_coverage=1 00:04:36.576 --rc genhtml_function_coverage=1 00:04:36.576 --rc genhtml_legend=1 00:04:36.576 --rc geninfo_all_blocks=1 00:04:36.576 --rc geninfo_unexecuted_blocks=1 00:04:36.576 00:04:36.576 ' 00:04:36.576 15:16:54 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:36.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.576 --rc genhtml_branch_coverage=1 00:04:36.576 --rc genhtml_function_coverage=1 00:04:36.576 --rc genhtml_legend=1 00:04:36.576 --rc geninfo_all_blocks=1 00:04:36.576 --rc geninfo_unexecuted_blocks=1 00:04:36.576 00:04:36.576 ' 00:04:36.576 15:16:54 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:36.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.576 --rc genhtml_branch_coverage=1 00:04:36.576 --rc genhtml_function_coverage=1 00:04:36.576 --rc genhtml_legend=1 00:04:36.576 --rc geninfo_all_blocks=1 00:04:36.576 --rc geninfo_unexecuted_blocks=1 00:04:36.576 00:04:36.576 ' 00:04:36.576 15:16:54 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:36.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.576 --rc genhtml_branch_coverage=1 00:04:36.576 --rc genhtml_function_coverage=1 00:04:36.576 --rc genhtml_legend=1 00:04:36.576 --rc geninfo_all_blocks=1 00:04:36.576 --rc geninfo_unexecuted_blocks=1 00:04:36.576 00:04:36.576 ' 00:04:36.576 15:16:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:36.576 15:16:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:36.576 15:16:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:36.576 15:16:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:36.576 15:16:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.576 15:16:54 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.576 15:16:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.576 ************************************ 
00:04:36.576 START TEST default_locks 00:04:36.576 ************************************ 00:04:36.576 15:16:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:36.576 15:16:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3545067 00:04:36.576 15:16:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3545067 00:04:36.576 15:16:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.576 15:16:54 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3545067 ']' 00:04:36.576 15:16:54 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.576 15:16:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.576 15:16:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.576 15:16:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.576 15:16:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.836 [2024-11-06 15:16:54.559669] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:04:36.836 [2024-11-06 15:16:54.559736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3545067 ] 00:04:36.836 [2024-11-06 15:16:54.649661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.836 [2024-11-06 15:16:54.684138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.407 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:37.407 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:37.407 15:16:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3545067 00:04:37.407 15:16:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3545067 00:04:37.407 15:16:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.978 lslocks: write error 00:04:37.978 15:16:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3545067 00:04:37.978 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3545067 ']' 00:04:37.978 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3545067 00:04:37.978 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:37.978 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:37.978 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3545067 00:04:37.978 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:37.978 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:37.978 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 3545067' 00:04:37.978 killing process with pid 3545067 00:04:37.978 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3545067 00:04:37.978 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3545067 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3545067 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3545067 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3545067 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3545067 ']' 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
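
The NOT wrapper exercised above inverts an exit status so that an expected failure passes the test. A sketch consistent with the trace (the es > 128 branch, which treats death-by-signal specially, is simplified here):

  NOT() {
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then return "$es"; fi       # signal deaths stay failures (simplified)
    (( !es == 0 ))                                 # succeed only when the wrapped command failed
  }
  NOT waitforlisten 3545067                        # passes: the killed target cannot listen again
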
00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3545067) - No such process 00:04:38.240 ERROR: process (pid: 3545067) is no longer running 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:38.240 00:04:38.240 real 0m1.503s 00:04:38.240 user 0m1.611s 00:04:38.240 sys 0m0.527s 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:38.240 15:16:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.240 ************************************ 00:04:38.240 END TEST default_locks 00:04:38.240 ************************************ 00:04:38.240 15:16:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:38.240 15:16:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:38.240 15:16:56 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:38.240 15:16:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.240 ************************************ 00:04:38.240 START TEST default_locks_via_rpc 00:04:38.240 ************************************ 00:04:38.240 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:38.240 15:16:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3545371 00:04:38.240 15:16:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3545371 00:04:38.240 15:16:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.240 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3545371 ']' 00:04:38.240 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.240 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:38.240 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
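
default_locks asserts both directions: while spdk_tgt -m 0x1 runs, lslocks must report an spdk_cpu_lock held by its pid, and after the kill no /var/tmp/spdk_cpu_lock_* file may survive. A sketch of the two checks (the no_locks glob and the nullglob use are assumptions; only the lslocks pipeline is verbatim from the trace):

  locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }   # as traced at cpu_locks.sh@22
  no_locks() {
    shopt -s nullglob                                          # assumed: empty glob -> empty array
    local lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 ))                               # a dead target must leave nothing behind
  }

The stray "lslocks: write error" lines are harmless: grep -q exits at the first match and lslocks reports the resulting broken pipe.
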
00:04:38.240 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:38.240 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.240 [2024-11-06 15:16:56.122236] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:04:38.240 [2024-11-06 15:16:56.122290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3545371 ] 00:04:38.240 [2024-11-06 15:16:56.209145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.501 [2024-11-06 15:16:56.244202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.072 15:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.073 15:16:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3545371 00:04:39.073 15:16:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3545371 00:04:39.073 15:16:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.643 15:16:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3545371 00:04:39.643 15:16:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3545371 ']' 00:04:39.643 15:16:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3545371 00:04:39.643 15:16:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:39.643 15:16:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:39.643 15:16:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3545371 00:04:39.643 15:16:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:39.643 
15:16:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:39.643 15:16:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3545371' 00:04:39.643 killing process with pid 3545371 00:04:39.643 15:16:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3545371 00:04:39.643 15:16:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3545371 00:04:39.904 00:04:39.904 real 0m1.592s 00:04:39.904 user 0m1.697s 00:04:39.904 sys 0m0.562s 00:04:39.904 15:16:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.904 15:16:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.904 ************************************ 00:04:39.904 END TEST default_locks_via_rpc 00:04:39.904 ************************************ 00:04:39.904 15:16:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:39.904 15:16:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:39.904 15:16:57 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.904 15:16:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.904 ************************************ 00:04:39.904 START TEST non_locking_app_on_locked_coremask 00:04:39.904 ************************************ 00:04:39.904 15:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:39.904 15:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3545725 00:04:39.904 15:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3545725 /var/tmp/spdk.sock 00:04:39.904 15:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.904 15:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3545725 ']' 00:04:39.904 15:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.904 15:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.904 15:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.904 15:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.904 15:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.904 [2024-11-06 15:16:57.791958] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
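
Before the non_locking test starting above, default_locks_via_rpc toggled the same core locks on a live target through RPC instead of restarting it. Reduced to its two calls (socket path from the trace; $tgt_pid is an illustrative pid variable):

  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # running target releases its locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null                                # expect: no output
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # target claims them again
  lslocks -p "$tgt_pid" | grep spdk_cpu_lock                             # expect: lock held by $tgt_pid
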
00:04:39.904 [2024-11-06 15:16:57.792013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3545725 ] 00:04:39.904 [2024-11-06 15:16:57.878196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.164 [2024-11-06 15:16:57.911863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.735 15:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.735 15:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:40.735 15:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3545867 00:04:40.735 15:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3545867 /var/tmp/spdk2.sock 00:04:40.735 15:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:40.735 15:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3545867 ']' 00:04:40.735 15:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.735 15:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.735 15:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.735 15:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.735 15:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.735 [2024-11-06 15:16:58.631766] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:04:40.735 [2024-11-06 15:16:58.631820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3545867 ] 00:04:40.995 [2024-11-06 15:16:58.719676] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
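
non_locking_app_on_locked_coremask, whose startup is traced here, runs two targets on the same core; the "CPU core locks deactivated" notice shows the second one opting out of the claim. In outline (binary path and flags exactly as traced):

  build/bin/spdk_tgt -m 0x1 &                                                # claims /var/tmp/spdk_cpu_lock_000
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # both report "Reactor started on core 0", but only the first holds the lock
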
00:04:40.995 [2024-11-06 15:16:58.719699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.995 [2024-11-06 15:16:58.778104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.566 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:41.566 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:41.566 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3545725 00:04:41.566 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3545725 00:04:41.566 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.139 lslocks: write error 00:04:42.139 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3545725 00:04:42.139 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3545725 ']' 00:04:42.139 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3545725 00:04:42.139 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:42.139 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:42.139 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3545725 00:04:42.139 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:42.139 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:42.139 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3545725' 00:04:42.139 killing process with pid 3545725 00:04:42.139 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3545725 00:04:42.139 15:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3545725 00:04:42.400 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3545867 00:04:42.400 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3545867 ']' 00:04:42.400 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3545867 00:04:42.400 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:42.400 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:42.400 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3545867 00:04:42.400 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:42.400 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:42.400 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3545867' 00:04:42.400 
killing process with pid 3545867 00:04:42.400 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3545867 00:04:42.400 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3545867 00:04:42.666 00:04:42.667 real 0m2.787s 00:04:42.667 user 0m3.112s 00:04:42.667 sys 0m0.838s 00:04:42.667 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.667 15:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.667 ************************************ 00:04:42.667 END TEST non_locking_app_on_locked_coremask 00:04:42.667 ************************************ 00:04:42.667 15:17:00 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:42.667 15:17:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.667 15:17:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.667 15:17:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.667 ************************************ 00:04:42.667 START TEST locking_app_on_unlocked_coremask 00:04:42.667 ************************************ 00:04:42.667 15:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:42.667 15:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3546244 00:04:42.667 15:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3546244 /var/tmp/spdk.sock 00:04:42.667 15:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:42.667 15:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3546244 ']' 00:04:42.667 15:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.667 15:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.667 15:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.667 15:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.667 15:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.965 [2024-11-06 15:17:00.648654] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:04:42.965 [2024-11-06 15:17:00.648705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3546244 ] 00:04:42.965 [2024-11-06 15:17:00.732718] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:42.965 [2024-11-06 15:17:00.732741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.965 [2024-11-06 15:17:00.763572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.576 15:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:43.576 15:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:43.576 15:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3546580 00:04:43.576 15:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3546580 /var/tmp/spdk2.sock 00:04:43.576 15:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3546580 ']' 00:04:43.576 15:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:43.576 15:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.576 15:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:43.576 15:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.576 15:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:43.576 15:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.576 [2024-11-06 15:17:01.487132] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:04:43.576 [2024-11-06 15:17:01.487184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3546580 ] 00:04:43.836 [2024-11-06 15:17:01.573166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.836 [2024-11-06 15:17:01.638767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.405 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.405 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:44.406 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3546580 00:04:44.406 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.406 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3546580 00:04:44.666 lslocks: write error 00:04:44.666 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3546244 00:04:44.666 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3546244 ']' 00:04:44.666 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3546244 00:04:44.666 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:44.666 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:44.666 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3546244 00:04:44.666 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:44.666 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:44.666 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3546244' 00:04:44.666 killing process with pid 3546244 00:04:44.666 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3546244 00:04:44.666 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3546244 00:04:45.236 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3546580 00:04:45.236 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3546580 ']' 00:04:45.236 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3546580 00:04:45.236 15:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:45.236 15:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:45.236 15:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3546580 00:04:45.236 15:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:45.236 15:17:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:45.236 15:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3546580' 00:04:45.236 killing process with pid 3546580 00:04:45.236 15:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3546580 00:04:45.236 15:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3546580 00:04:45.496 00:04:45.496 real 0m2.658s 00:04:45.496 user 0m2.975s 00:04:45.496 sys 0m0.796s 00:04:45.496 15:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:45.496 15:17:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.496 ************************************ 00:04:45.496 END TEST locking_app_on_unlocked_coremask 00:04:45.496 ************************************ 00:04:45.496 15:17:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:45.496 15:17:03 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:45.496 15:17:03 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.496 15:17:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.496 ************************************ 00:04:45.496 START TEST locking_app_on_locked_coremask 00:04:45.496 ************************************ 00:04:45.496 15:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:04:45.496 15:17:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3546954 00:04:45.496 15:17:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3546954 /var/tmp/spdk.sock 00:04:45.496 15:17:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.496 15:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3546954 ']' 00:04:45.496 15:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.496 15:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:45.496 15:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.496 15:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:45.496 15:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.496 [2024-11-06 15:17:03.392976] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
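
locking_app_on_locked_coremask, starting above, relies on waitforlisten both positively (the first target must come up) and under NOT (the second must not). A heavily condensed sketch of that helper, assuming the usual rpc_get_methods probe; the real autotest_common.sh version carries more retries and diagnostics:

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 100; i > 0; i--)); do
      kill -0 "$pid" 2>/dev/null || return 1                              # process gone: startup failed
      scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
      sleep 0.5
    done
    return 1
  }
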
00:04:45.496 [2024-11-06 15:17:03.393033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3546954 ] 00:04:45.496 [2024-11-06 15:17:03.475775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.756 [2024-11-06 15:17:03.506011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3547020 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3547020 /var/tmp/spdk2.sock 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3547020 /var/tmp/spdk2.sock 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3547020 /var/tmp/spdk2.sock 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3547020 ']' 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:46.326 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.326 [2024-11-06 15:17:04.240283] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:04:46.326 [2024-11-06 15:17:04.240338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547020 ] 00:04:46.586 [2024-11-06 15:17:04.331118] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3546954 has claimed it. 00:04:46.586 [2024-11-06 15:17:04.331151] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:47.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3547020) - No such process 00:04:47.156 ERROR: process (pid: 3547020) is no longer running 00:04:47.156 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.156 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:47.156 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:47.156 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:47.156 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:47.156 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:47.156 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3546954 00:04:47.156 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3546954 00:04:47.156 15:17:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:47.416 lslocks: write error 00:04:47.416 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3546954 00:04:47.416 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3546954 ']' 00:04:47.416 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3546954 00:04:47.416 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:47.416 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:47.416 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3546954 00:04:47.416 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:47.416 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:47.416 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3546954' 00:04:47.416 killing process with pid 3546954 00:04:47.416 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3546954 00:04:47.416 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3546954 00:04:47.675 00:04:47.675 real 0m2.221s 00:04:47.675 user 0m2.514s 00:04:47.675 sys 0m0.630s 00:04:47.675 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
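
The claim_cpu_cores error above is the expected outcome: a second fully locking target on an already claimed core must abort, and NOT waitforlisten turns that abort into a pass. The failing half in outline (pid from the trace; $pid2 illustrative):

  build/bin/spdk_tgt -m 0x1 &                                  # pid 3546954, claims core 0
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!   # aborts: core 0 already claimed
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock                # passes once pid2 has exited
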
00:04:47.675 15:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.675 ************************************ 00:04:47.675 END TEST locking_app_on_locked_coremask 00:04:47.675 ************************************ 00:04:47.675 15:17:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:47.675 15:17:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.675 15:17:05 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.675 15:17:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.675 ************************************ 00:04:47.675 START TEST locking_overlapped_coremask 00:04:47.675 ************************************ 00:04:47.675 15:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:04:47.675 15:17:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3547331 00:04:47.675 15:17:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3547331 /var/tmp/spdk.sock 00:04:47.675 15:17:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:47.675 15:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3547331 ']' 00:04:47.675 15:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.675 15:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.675 15:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.675 15:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.675 15:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.935 [2024-11-06 15:17:05.691701] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:04:47.935 [2024-11-06 15:17:05.691763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547331 ] 00:04:47.935 [2024-11-06 15:17:05.774707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:47.935 [2024-11-06 15:17:05.807110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.935 [2024-11-06 15:17:05.807242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.935 [2024-11-06 15:17:05.807243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3547667 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3547667 /var/tmp/spdk2.sock 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3547667 /var/tmp/spdk2.sock 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3547667 /var/tmp/spdk2.sock 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3547667 ']' 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.507 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.767 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:48.767 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.767 15:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.767 [2024-11-06 15:17:06.540262] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
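
locking_overlapped_coremask repeats the conflict with partially overlapping masks: 0x7 covers cores 0, 1, 2 and 0x1c covers cores 2, 3, 4, so the two targets collide on core 2 only (masks and error exactly as traced here and below). In outline:

  build/bin/spdk_tgt -m 0x7 &                                   # cores 0,1,2 -> spdk_cpu_lock_000..002
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock & pid2=$!   # cores 2,3,4 -> collides on core 2
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock

check_remaining_locks (traced below at cpu_locks.sh@36-38) then compares the surviving /var/tmp/spdk_cpu_lock_* glob against the expected brace expansion spdk_cpu_lock_{000..002}.
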
00:04:48.767 [2024-11-06 15:17:06.540315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547667 ] 00:04:48.767 [2024-11-06 15:17:06.652338] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3547331 has claimed it. 00:04:48.767 [2024-11-06 15:17:06.652380] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:49.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3547667) - No such process 00:04:49.338 ERROR: process (pid: 3547667) is no longer running 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3547331 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3547331 ']' 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3547331 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3547331 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3547331' 00:04:49.338 killing process with pid 3547331 00:04:49.338 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3547331 00:04:49.338 15:17:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3547331 00:04:49.600 00:04:49.600 real 0m1.789s 00:04:49.600 user 0m5.163s 00:04:49.600 sys 0m0.399s 00:04:49.600 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.600 15:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.600 ************************************ 00:04:49.600 END TEST locking_overlapped_coremask 00:04:49.600 ************************************ 00:04:49.600 15:17:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:49.600 15:17:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.600 15:17:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.600 15:17:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.600 ************************************ 00:04:49.600 START TEST locking_overlapped_coremask_via_rpc 00:04:49.600 ************************************ 00:04:49.600 15:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:04:49.600 15:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3547730 00:04:49.600 15:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3547730 /var/tmp/spdk.sock 00:04:49.600 15:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:49.600 15:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3547730 ']' 00:04:49.600 15:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.600 15:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:49.600 15:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.600 15:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:49.600 15:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.600 [2024-11-06 15:17:07.548302] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:04:49.600 [2024-11-06 15:17:07.548358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547730 ] 00:04:49.862 [2024-11-06 15:17:07.637535] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
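The check_remaining_locks step traced above is what verifies lock ownership: it globs the lock files under /var/tmp and compares them against the set implied by the claimed coremask. A minimal sketch of that check, using the same paths as this run (mask 0x7 claims cores 0-2):
# lock files actually present after the target claimed its cores
locks=(/var/tmp/spdk_cpu_lock_*)
# files expected for coremask 0x7, i.e. cores 000..002
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
# the test only passes if the two lists match exactly
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks intact"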
00:04:49.862 [2024-11-06 15:17:07.637561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:49.862 [2024-11-06 15:17:07.672783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.862 [2024-11-06 15:17:07.672878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.862 [2024-11-06 15:17:07.672876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.432 15:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.432 15:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:50.432 15:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3548038 00:04:50.432 15:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3548038 /var/tmp/spdk2.sock 00:04:50.432 15:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3548038 ']' 00:04:50.432 15:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:50.433 15:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.433 15:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.433 15:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.433 15:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.433 15:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.433 [2024-11-06 15:17:08.405031] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:04:50.433 [2024-11-06 15:17:08.405083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3548038 ] 00:04:50.693 [2024-11-06 15:17:08.518489] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
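Both targets start cleanly here even though masks 0x7 (cores 0-2) and 0x1c (cores 2-4) overlap on core 2, because --disable-cpumask-locks defers lock acquisition past startup. A condensed sketch of the two launches traced above (binary path shortened from the /var/jenkins workspace prefix):
# first target: cores 0-2, default RPC socket, no startup locks
spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
# second target: cores 2-4, separate RPC socket, no startup locks
spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &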
00:04:50.693 [2024-11-06 15:17:08.518518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:50.693 [2024-11-06 15:17:08.596239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.693 [2024-11-06 15:17:08.596397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.693 [2024-11-06 15:17:08.596399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.264 [2024-11-06 15:17:09.201835] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3547730 has claimed it. 
00:04:51.264 request: 00:04:51.264 { 00:04:51.264 "method": "framework_enable_cpumask_locks", 00:04:51.264 "req_id": 1 00:04:51.264 } 00:04:51.264 Got JSON-RPC error response 00:04:51.264 response: 00:04:51.264 { 00:04:51.264 "code": -32603, 00:04:51.264 "message": "Failed to claim CPU core: 2" 00:04:51.264 } 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3547730 /var/tmp/spdk.sock 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3547730 ']' 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:51.264 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.525 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:51.525 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:51.525 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3548038 /var/tmp/spdk2.sock 00:04:51.525 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3548038 ']' 00:04:51.525 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.525 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:51.525 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
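The -32603 response above is the expected result: once the first target acquires its locks through the RPC, the second target's request fails on the shared core. The two calls as cpu_locks.sh issues them (rpc.py path relative to the spdk checkout):
# enable locks on the first target (default socket) - succeeds
scripts/rpc.py framework_enable_cpumask_locks
# enable locks on the second target - cannot claim core 2
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# => JSON-RPC error -32603: "Failed to claim CPU core: 2"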
00:04:51.525 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:51.525 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.785 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:51.785 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:51.785 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:51.786 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:51.786 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:51.786 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:51.786 00:04:51.786 real 0m2.087s 00:04:51.786 user 0m0.850s 00:04:51.786 sys 0m0.165s 00:04:51.786 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.786 15:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.786 ************************************ 00:04:51.786 END TEST locking_overlapped_coremask_via_rpc 00:04:51.786 ************************************ 00:04:51.786 15:17:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:51.786 15:17:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3547730 ]] 00:04:51.786 15:17:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3547730 00:04:51.786 15:17:09 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3547730 ']' 00:04:51.786 15:17:09 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3547730 00:04:51.786 15:17:09 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:04:51.786 15:17:09 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:51.786 15:17:09 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3547730 00:04:51.786 15:17:09 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:51.786 15:17:09 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:51.786 15:17:09 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3547730' 00:04:51.786 killing process with pid 3547730 00:04:51.786 15:17:09 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3547730 00:04:51.786 15:17:09 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3547730 00:04:52.046 15:17:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3548038 ]] 00:04:52.046 15:17:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3548038 00:04:52.046 15:17:09 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3548038 ']' 00:04:52.046 15:17:09 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3548038 00:04:52.046 15:17:09 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:04:52.046 15:17:09 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:04:52.046 15:17:09 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3548038 00:04:52.046 15:17:09 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:52.046 15:17:09 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:52.046 15:17:09 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3548038' 00:04:52.046 killing process with pid 3548038 00:04:52.046 15:17:09 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3548038 00:04:52.046 15:17:09 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3548038 00:04:52.306 15:17:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:52.306 15:17:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:52.306 15:17:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3547730 ]] 00:04:52.306 15:17:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3547730 00:04:52.306 15:17:10 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3547730 ']' 00:04:52.306 15:17:10 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3547730 00:04:52.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3547730) - No such process 00:04:52.306 15:17:10 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3547730 is not found' 00:04:52.306 Process with pid 3547730 is not found 00:04:52.306 15:17:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3548038 ]] 00:04:52.306 15:17:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3548038 00:04:52.306 15:17:10 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3548038 ']' 00:04:52.306 15:17:10 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3548038 00:04:52.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3548038) - No such process 00:04:52.306 15:17:10 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3548038 is not found' 00:04:52.306 Process with pid 3548038 is not found 00:04:52.306 15:17:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:52.306 00:04:52.306 real 0m15.906s 00:04:52.306 user 0m27.959s 00:04:52.306 sys 0m4.866s 00:04:52.306 15:17:10 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.306 15:17:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.306 ************************************ 00:04:52.306 END TEST cpu_locks 00:04:52.306 ************************************ 00:04:52.306 00:04:52.306 real 0m41.791s 00:04:52.306 user 1m22.160s 00:04:52.306 sys 0m8.258s 00:04:52.306 15:17:10 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.306 15:17:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.306 ************************************ 00:04:52.306 END TEST event 00:04:52.306 ************************************ 00:04:52.306 15:17:10 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:52.306 15:17:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.306 15:17:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.306 15:17:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.306 ************************************ 00:04:52.306 START TEST thread 00:04:52.306 ************************************ 00:04:52.306 15:17:10 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:52.567 * Looking for test storage... 00:04:52.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:52.567 15:17:10 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:52.567 15:17:10 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:04:52.567 15:17:10 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:52.567 15:17:10 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.567 15:17:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.567 15:17:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.567 15:17:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.567 15:17:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.567 15:17:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.567 15:17:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.567 15:17:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.567 15:17:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.567 15:17:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.567 15:17:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.567 15:17:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.567 15:17:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:52.567 15:17:10 thread -- scripts/common.sh@345 -- # : 1 00:04:52.567 15:17:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.567 15:17:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.567 15:17:10 thread -- scripts/common.sh@365 -- # decimal 1 00:04:52.567 15:17:10 thread -- scripts/common.sh@353 -- # local d=1 00:04:52.567 15:17:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.567 15:17:10 thread -- scripts/common.sh@355 -- # echo 1 00:04:52.567 15:17:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.567 15:17:10 thread -- scripts/common.sh@366 -- # decimal 2 00:04:52.567 15:17:10 thread -- scripts/common.sh@353 -- # local d=2 00:04:52.567 15:17:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.567 15:17:10 thread -- scripts/common.sh@355 -- # echo 2 00:04:52.567 15:17:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.567 15:17:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.567 15:17:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.567 15:17:10 thread -- scripts/common.sh@368 -- # return 0 00:04:52.567 15:17:10 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.567 15:17:10 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.567 --rc genhtml_branch_coverage=1 00:04:52.567 --rc genhtml_function_coverage=1 00:04:52.567 --rc genhtml_legend=1 00:04:52.567 --rc geninfo_all_blocks=1 00:04:52.567 --rc geninfo_unexecuted_blocks=1 00:04:52.567 00:04:52.567 ' 00:04:52.567 15:17:10 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.567 --rc genhtml_branch_coverage=1 00:04:52.567 --rc genhtml_function_coverage=1 00:04:52.567 --rc genhtml_legend=1 00:04:52.567 --rc geninfo_all_blocks=1 00:04:52.567 --rc geninfo_unexecuted_blocks=1 00:04:52.568 
00:04:52.568 ' 00:04:52.568 15:17:10 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.568 --rc genhtml_branch_coverage=1 00:04:52.568 --rc genhtml_function_coverage=1 00:04:52.568 --rc genhtml_legend=1 00:04:52.568 --rc geninfo_all_blocks=1 00:04:52.568 --rc geninfo_unexecuted_blocks=1 00:04:52.568 00:04:52.568 ' 00:04:52.568 15:17:10 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.568 --rc genhtml_branch_coverage=1 00:04:52.568 --rc genhtml_function_coverage=1 00:04:52.568 --rc genhtml_legend=1 00:04:52.568 --rc geninfo_all_blocks=1 00:04:52.568 --rc geninfo_unexecuted_blocks=1 00:04:52.568 00:04:52.568 ' 00:04:52.568 15:17:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:52.568 15:17:10 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:04:52.568 15:17:10 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.568 15:17:10 thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.568 ************************************ 00:04:52.568 START TEST thread_poller_perf 00:04:52.568 ************************************ 00:04:52.568 15:17:10 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:52.568 [2024-11-06 15:17:10.524523] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:04:52.568 [2024-11-06 15:17:10.524619] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3548488 ] 00:04:52.828 [2024-11-06 15:17:10.610659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.828 [2024-11-06 15:17:10.650119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.828 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:53.771 [2024-11-06T14:17:11.754Z] ====================================== 00:04:53.771 [2024-11-06T14:17:11.754Z] busy:2407984654 (cyc) 00:04:53.771 [2024-11-06T14:17:11.754Z] total_run_count: 419000 00:04:53.771 [2024-11-06T14:17:11.754Z] tsc_hz: 2400000000 (cyc) 00:04:53.771 [2024-11-06T14:17:11.754Z] ====================================== 00:04:53.771 [2024-11-06T14:17:11.754Z] poller_cost: 5746 (cyc), 2394 (nsec) 00:04:53.771 00:04:53.771 real 0m1.181s 00:04:53.771 user 0m1.100s 00:04:53.771 sys 0m0.076s 00:04:53.771 15:17:11 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.771 15:17:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:53.771 ************************************ 00:04:53.771 END TEST thread_poller_perf 00:04:53.771 ************************************ 00:04:53.771 15:17:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:53.771 15:17:11 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:04:53.771 15:17:11 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.771 15:17:11 thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.032 ************************************ 00:04:54.032 START TEST thread_poller_perf 00:04:54.032 ************************************ 00:04:54.032 15:17:11 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:54.032 [2024-11-06 15:17:11.782122] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:04:54.032 [2024-11-06 15:17:11.782210] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3548840 ] 00:04:54.032 [2024-11-06 15:17:11.871979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.032 [2024-11-06 15:17:11.903853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.032 Running 1000 pollers for 1 seconds with 0 microseconds period. 
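The poller_cost figure in the first run's table above follows directly from the counters printed with it: busy cycles divided by total_run_count, then scaled by tsc_hz into nanoseconds. The same arithmetic in shell:
echo $(( 2407984654 / 419000 ))             # 5746 cycles per poller call
echo $(( 5746 * 1000000000 / 2400000000 ))  # 2394 nsec at tsc_hz 2400000000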
00:04:54.973 [2024-11-06T14:17:12.956Z] ====================================== 00:04:54.973 [2024-11-06T14:17:12.956Z] busy:2401410068 (cyc) 00:04:54.973 [2024-11-06T14:17:12.956Z] total_run_count: 5564000 00:04:54.973 [2024-11-06T14:17:12.956Z] tsc_hz: 2400000000 (cyc) 00:04:54.973 [2024-11-06T14:17:12.956Z] ====================================== 00:04:54.973 [2024-11-06T14:17:12.956Z] poller_cost: 431 (cyc), 179 (nsec) 00:04:54.973 00:04:54.973 real 0m1.170s 00:04:54.973 user 0m1.090s 00:04:54.973 sys 0m0.077s 00:04:54.973 15:17:12 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:54.973 15:17:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.973 ************************************ 00:04:54.973 END TEST thread_poller_perf 00:04:54.974 ************************************ 00:04:55.234 15:17:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:55.234 00:04:55.234 real 0m2.701s 00:04:55.234 user 0m2.366s 00:04:55.234 sys 0m0.346s 00:04:55.234 15:17:12 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.234 15:17:12 thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.234 ************************************ 00:04:55.234 END TEST thread 00:04:55.234 ************************************ 00:04:55.234 15:17:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:55.234 15:17:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:55.234 15:17:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.234 15:17:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.234 15:17:13 -- common/autotest_common.sh@10 -- # set +x 00:04:55.235 ************************************ 00:04:55.235 START TEST app_cmdline 00:04:55.235 ************************************ 00:04:55.235 15:17:13 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:55.235 * Looking for test storage... 
00:04:55.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:55.235 15:17:13 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:55.235 15:17:13 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:04:55.235 15:17:13 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:55.495 15:17:13 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:55.495 15:17:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.496 15:17:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:55.496 15:17:13 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.496 15:17:13 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:55.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.496 --rc genhtml_branch_coverage=1 00:04:55.496 --rc genhtml_function_coverage=1 00:04:55.496 --rc genhtml_legend=1 00:04:55.496 --rc geninfo_all_blocks=1 00:04:55.496 --rc geninfo_unexecuted_blocks=1 00:04:55.496 00:04:55.496 ' 00:04:55.496 15:17:13 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:55.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.496 --rc genhtml_branch_coverage=1 00:04:55.496 --rc genhtml_function_coverage=1 00:04:55.496 --rc genhtml_legend=1 00:04:55.496 --rc geninfo_all_blocks=1 00:04:55.496 --rc geninfo_unexecuted_blocks=1 
00:04:55.496 00:04:55.496 ' 00:04:55.496 15:17:13 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:55.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.496 --rc genhtml_branch_coverage=1 00:04:55.496 --rc genhtml_function_coverage=1 00:04:55.496 --rc genhtml_legend=1 00:04:55.496 --rc geninfo_all_blocks=1 00:04:55.496 --rc geninfo_unexecuted_blocks=1 00:04:55.496 00:04:55.496 ' 00:04:55.496 15:17:13 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:55.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.496 --rc genhtml_branch_coverage=1 00:04:55.496 --rc genhtml_function_coverage=1 00:04:55.496 --rc genhtml_legend=1 00:04:55.496 --rc geninfo_all_blocks=1 00:04:55.496 --rc geninfo_unexecuted_blocks=1 00:04:55.496 00:04:55.496 ' 00:04:55.496 15:17:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:55.496 15:17:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3549239 00:04:55.496 15:17:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3549239 00:04:55.496 15:17:13 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:55.496 15:17:13 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3549239 ']' 00:04:55.496 15:17:13 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.496 15:17:13 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.496 15:17:13 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.496 15:17:13 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.496 15:17:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:55.496 [2024-11-06 15:17:13.322228] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:04:55.496 [2024-11-06 15:17:13.322284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3549239 ] 00:04:55.496 [2024-11-06 15:17:13.406458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.496 [2024-11-06 15:17:13.447003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:04:56.437 15:17:14 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:56.437 { 00:04:56.437 "version": "SPDK v25.01-pre git sha1 924c8133b", 00:04:56.437 "fields": { 00:04:56.437 "major": 25, 00:04:56.437 "minor": 1, 00:04:56.437 "patch": 0, 00:04:56.437 "suffix": "-pre", 00:04:56.437 "commit": "924c8133b" 00:04:56.437 } 00:04:56.437 } 00:04:56.437 15:17:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:56.437 15:17:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:56.437 15:17:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:56.437 15:17:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:56.437 15:17:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:56.437 15:17:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:56.437 15:17:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.437 15:17:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:56.437 15:17:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:56.437 15:17:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:56.437 15:17:14 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:56.698 request: 00:04:56.698 { 00:04:56.698 "method": "env_dpdk_get_mem_stats", 00:04:56.698 "req_id": 1 00:04:56.698 } 00:04:56.698 Got JSON-RPC error response 00:04:56.698 response: 00:04:56.698 { 00:04:56.698 "code": -32601, 00:04:56.698 "message": "Method not found" 00:04:56.698 } 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.698 15:17:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3549239 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3549239 ']' 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3549239 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3549239 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3549239' 00:04:56.698 killing process with pid 3549239 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@971 -- # kill 3549239 00:04:56.698 15:17:14 app_cmdline -- common/autotest_common.sh@976 -- # wait 3549239 00:04:56.960 00:04:56.960 real 0m1.716s 00:04:56.960 user 0m2.063s 00:04:56.960 sys 0m0.460s 00:04:56.960 15:17:14 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.960 15:17:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:56.960 ************************************ 00:04:56.960 END TEST app_cmdline 00:04:56.960 ************************************ 00:04:56.960 15:17:14 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:56.960 15:17:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.960 15:17:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.960 15:17:14 -- common/autotest_common.sh@10 -- # set +x 00:04:56.960 ************************************ 00:04:56.960 START TEST version 00:04:56.960 ************************************ 00:04:56.960 15:17:14 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:56.960 * Looking for test storage... 
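The app_cmdline test that just ended exercises the --rpcs-allowed allowlist: the two whitelisted methods answer normally and everything else is rejected with -32601 before dispatch. A condensed sketch of the calls traced above (paths relative to the spdk checkout):
spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
scripts/rpc.py spdk_get_version        # allowed: returns the version JSON
scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two methods
scripts/rpc.py env_dpdk_get_mem_stats  # rejected: -32601 "Method not found"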
00:04:57.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:57.222 15:17:14 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.222 15:17:14 version -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.222 15:17:14 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.222 15:17:15 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.222 15:17:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.222 15:17:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.222 15:17:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.222 15:17:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.222 15:17:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.222 15:17:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.222 15:17:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.222 15:17:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.222 15:17:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.222 15:17:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.222 15:17:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.222 15:17:15 version -- scripts/common.sh@344 -- # case "$op" in 00:04:57.222 15:17:15 version -- scripts/common.sh@345 -- # : 1 00:04:57.222 15:17:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.222 15:17:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.222 15:17:15 version -- scripts/common.sh@365 -- # decimal 1 00:04:57.222 15:17:15 version -- scripts/common.sh@353 -- # local d=1 00:04:57.222 15:17:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.222 15:17:15 version -- scripts/common.sh@355 -- # echo 1 00:04:57.222 15:17:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.222 15:17:15 version -- scripts/common.sh@366 -- # decimal 2 00:04:57.222 15:17:15 version -- scripts/common.sh@353 -- # local d=2 00:04:57.222 15:17:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.222 15:17:15 version -- scripts/common.sh@355 -- # echo 2 00:04:57.222 15:17:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.222 15:17:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.222 15:17:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.222 15:17:15 version -- scripts/common.sh@368 -- # return 0 00:04:57.222 15:17:15 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.222 15:17:15 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.222 --rc genhtml_branch_coverage=1 00:04:57.222 --rc genhtml_function_coverage=1 00:04:57.222 --rc genhtml_legend=1 00:04:57.222 --rc geninfo_all_blocks=1 00:04:57.222 --rc geninfo_unexecuted_blocks=1 00:04:57.222 00:04:57.222 ' 00:04:57.222 15:17:15 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.222 --rc genhtml_branch_coverage=1 00:04:57.222 --rc genhtml_function_coverage=1 00:04:57.222 --rc genhtml_legend=1 00:04:57.222 --rc geninfo_all_blocks=1 00:04:57.222 --rc geninfo_unexecuted_blocks=1 00:04:57.222 00:04:57.222 ' 00:04:57.222 15:17:15 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:57.222 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.222 --rc genhtml_branch_coverage=1 00:04:57.222 --rc genhtml_function_coverage=1 00:04:57.222 --rc genhtml_legend=1 00:04:57.222 --rc geninfo_all_blocks=1 00:04:57.222 --rc geninfo_unexecuted_blocks=1 00:04:57.222 00:04:57.222 ' 00:04:57.222 15:17:15 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.222 --rc genhtml_branch_coverage=1 00:04:57.222 --rc genhtml_function_coverage=1 00:04:57.222 --rc genhtml_legend=1 00:04:57.222 --rc geninfo_all_blocks=1 00:04:57.222 --rc geninfo_unexecuted_blocks=1 00:04:57.222 00:04:57.222 ' 00:04:57.222 15:17:15 version -- app/version.sh@17 -- # get_header_version major 00:04:57.222 15:17:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:57.222 15:17:15 version -- app/version.sh@14 -- # cut -f2 00:04:57.222 15:17:15 version -- app/version.sh@14 -- # tr -d '"' 00:04:57.222 15:17:15 version -- app/version.sh@17 -- # major=25 00:04:57.222 15:17:15 version -- app/version.sh@18 -- # get_header_version minor 00:04:57.222 15:17:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:57.222 15:17:15 version -- app/version.sh@14 -- # cut -f2 00:04:57.222 15:17:15 version -- app/version.sh@14 -- # tr -d '"' 00:04:57.222 15:17:15 version -- app/version.sh@18 -- # minor=1 00:04:57.222 15:17:15 version -- app/version.sh@19 -- # get_header_version patch 00:04:57.222 15:17:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:57.222 15:17:15 version -- app/version.sh@14 -- # cut -f2 00:04:57.222 15:17:15 version -- app/version.sh@14 -- # tr -d '"' 00:04:57.222 15:17:15 version -- app/version.sh@19 -- # patch=0 00:04:57.222 15:17:15 version -- app/version.sh@20 -- # get_header_version suffix 00:04:57.222 15:17:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:57.222 15:17:15 version -- app/version.sh@14 -- # cut -f2 00:04:57.222 15:17:15 version -- app/version.sh@14 -- # tr -d '"' 00:04:57.222 15:17:15 version -- app/version.sh@20 -- # suffix=-pre 00:04:57.222 15:17:15 version -- app/version.sh@22 -- # version=25.1 00:04:57.222 15:17:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:57.222 15:17:15 version -- app/version.sh@28 -- # version=25.1rc0 00:04:57.222 15:17:15 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:57.222 15:17:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:57.222 15:17:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:57.222 15:17:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:57.222 00:04:57.222 real 0m0.281s 00:04:57.222 user 0m0.159s 00:04:57.222 sys 0m0.172s 00:04:57.222 15:17:15 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.222 
15:17:15 version -- common/autotest_common.sh@10 -- # set +x 00:04:57.222 ************************************ 00:04:57.222 END TEST version 00:04:57.222 ************************************ 00:04:57.222 15:17:15 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:57.222 15:17:15 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:57.222 15:17:15 -- spdk/autotest.sh@194 -- # uname -s 00:04:57.222 15:17:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:57.222 15:17:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:57.222 15:17:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:57.222 15:17:15 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:57.222 15:17:15 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:04:57.222 15:17:15 -- spdk/autotest.sh@256 -- # timing_exit lib 00:04:57.222 15:17:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.222 15:17:15 -- common/autotest_common.sh@10 -- # set +x 00:04:57.484 15:17:15 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:04:57.484 15:17:15 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:04:57.484 15:17:15 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:04:57.484 15:17:15 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:04:57.484 15:17:15 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:04:57.484 15:17:15 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:04:57.484 15:17:15 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:57.484 15:17:15 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:57.484 15:17:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.484 15:17:15 -- common/autotest_common.sh@10 -- # set +x 00:04:57.484 ************************************ 00:04:57.484 START TEST nvmf_tcp 00:04:57.484 ************************************ 00:04:57.484 15:17:15 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:57.484 * Looking for test storage... 
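The version test above assembles "25.1rc0" by scraping include/spdk/version.h; every component uses the same grep/cut/tr pipeline, for example the major number:
grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 25
# minor=1, patch=0 and suffix=-pre are read the same way; patch == 0 selects the 25.1rc0 form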
00:04:57.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:57.484 15:17:15 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.484 15:17:15 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.484 15:17:15 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.484 15:17:15 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.484 15:17:15 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:57.484 15:17:15 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.484 15:17:15 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.484 --rc genhtml_branch_coverage=1 00:04:57.484 --rc genhtml_function_coverage=1 00:04:57.484 --rc genhtml_legend=1 00:04:57.484 --rc geninfo_all_blocks=1 00:04:57.484 --rc geninfo_unexecuted_blocks=1 00:04:57.484 00:04:57.484 ' 00:04:57.484 15:17:15 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.484 --rc genhtml_branch_coverage=1 00:04:57.484 --rc genhtml_function_coverage=1 00:04:57.484 --rc genhtml_legend=1 00:04:57.484 --rc geninfo_all_blocks=1 00:04:57.484 --rc geninfo_unexecuted_blocks=1 00:04:57.484 00:04:57.484 ' 00:04:57.484 15:17:15 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.484 --rc genhtml_branch_coverage=1 00:04:57.484 --rc genhtml_function_coverage=1 00:04:57.484 --rc genhtml_legend=1 00:04:57.484 --rc geninfo_all_blocks=1 00:04:57.484 --rc geninfo_unexecuted_blocks=1 00:04:57.485 00:04:57.485 ' 00:04:57.485 15:17:15 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.485 --rc genhtml_branch_coverage=1 00:04:57.485 --rc genhtml_function_coverage=1 00:04:57.485 --rc genhtml_legend=1 00:04:57.485 --rc geninfo_all_blocks=1 00:04:57.485 --rc geninfo_unexecuted_blocks=1 00:04:57.485 00:04:57.485 ' 00:04:57.485 15:17:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:57.485 15:17:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:57.485 15:17:15 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:57.485 15:17:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:57.485 15:17:15 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.485 15:17:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.747 ************************************ 00:04:57.747 START TEST nvmf_target_core ************************************ 00:04:57.747 15:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:57.747 * Looking for test storage... 00:04:57.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:57.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:57.748 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.010 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.010 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.010 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:58.011 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:58.011 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:58.011 15:17:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:58.011 15:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:58.011 15:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.011 15:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:58.011 
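Every suite and sub-test above opens with the same probe: take the last field of lcov --version, then evaluate lt 1.15 2 so the extra --rc coverage flags are only exported for a pre-2.0 lcov. The cmp_versions helper that lt calls splits both version strings on dots, dashes, and colons and compares field by field, exactly as the scripts/common.sh trace shows. A minimal standalone sketch of that idiom (a simplified reconstruction, not the exact SPDK source; purely numeric fields are assumed):

# Sketch of the version comparison traced from scripts/common.sh above.
cmp_versions() {    # e.g. cmp_versions 1.15 '<' 2
    local IFS=.-:   # field separators, matching IFS=.-: in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a > b )) && { [[ $2 == '>' ]]; return; }
        (( a < b )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == '=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo 'lcov 1.15 < 2: keep the extra --rc branch/function coverage flags'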
************************************ 00:04:58.011 START TEST nvmf_abort ************************************ 00:04:58.011 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:58.011 * Looking for test storage... 00:04:58.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:58.011 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.274 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:58.274 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:58.274 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
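By the time abort.sh reaches nvmftestinit, the sourced test/nvmf/common.sh (traced in full under nvmf_target_core above) has already fixed the listener ports, generated a host identity, and assembled the target's base argument list. Condensed below; the HOSTID derivation is an assumption inferred from the logged values, not a quote of the script:

# Condensed sketch of the common.sh environment; not the full script.
NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumption: the uuid suffix, matching the logged value
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id plus full tracepoint mask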
00:04:58.274 15:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:58.274 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:58.274 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:58.274 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:58.274 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:58.274 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:58.274 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:58.274 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:58.274 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:58.274 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:58.274 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:58.274 15:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.418 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:06.418 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:06.418 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:06.418 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:06.418 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:06.418 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:06.418 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:06.419 15:17:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:06.419 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:06.419 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:06.419 15:17:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:06.419 Found net devices under 0000:31:00.0: cvl_0_0 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:06.419 Found net devices under 0000:31:00.1: cvl_0_1 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:06.419 15:17:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:06.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:06.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:05:06.419 00:05:06.419 --- 10.0.0.2 ping statistics --- 00:05:06.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:06.419 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:06.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
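nvmf_tcp_init above takes the two ice-driven e810 ports found on the PCI bus (cvl_0_0 and cvl_0_1) and builds a self-contained target/initiator pair: the target port moves into a private network namespace so the kernel initiator and the SPDK target exchange real packets on a single host. Laid out as plain commands, this is a restatement of the traced sequence, not the script itself:

# Target NIC into its own namespace; initiator NIC stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, tagging the rule so teardown can find it again.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> initiator

Both pings must come back (the 0.612 ms and 0.281 ms round-trips in the log) before the harness treats the topology as usable.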
00:05:06.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:05:06.419 00:05:06.419 --- 10.0.0.1 ping statistics --- 00:05:06.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:06.419 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:06.419 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.420 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.420 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3553761 00:05:06.420 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3553761 00:05:06.420 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:06.420 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3553761 ']' 00:05:06.420 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.420 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:06.420 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.420 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:06.420 15:17:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.420 [2024-11-06 15:17:23.734304] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
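nvmfappstart then launches the target inside that namespace: shared-memory id 0, every tracepoint group enabled, and core mask 0xE, which is why exactly three reactors report in on cores 1, 2 and 3 below. waitforlisten blocks until the RPC socket answers before any configuration is attempted. A rough equivalent of those two steps, with paths relative to the spdk tree (the polling loop is an illustration; the harness's waitforlisten is more elaborate):

# Start the target in the namespace: shm id 0, tracepoints 0xFFFF, cores 1-3.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Poll the RPC socket until the app is ready (or has died) before any rpc.py call.
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited during startup' >&2; exit 1; }
    sleep 0.5
done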
00:05:06.420 [2024-11-06 15:17:23.734368] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:06.420 [2024-11-06 15:17:23.838903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:06.420 [2024-11-06 15:17:23.893519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:06.420 [2024-11-06 15:17:23.893575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:06.420 [2024-11-06 15:17:23.893584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:06.420 [2024-11-06 15:17:23.893591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:06.420 [2024-11-06 15:17:23.893597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:06.420 [2024-11-06 15:17:23.895445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.420 [2024-11-06 15:17:23.895606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.420 [2024-11-06 15:17:23.895607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.681 [2024-11-06 15:17:24.616308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.681 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.943 Malloc0 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.943 Delay0 
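With the target listening on its RPC socket, the test stacks up its block device over three rpc_cmd calls: a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks, and a delay bdev on top whose one-second latencies keep I/O in flight long enough to be aborted. As direct rpc.py invocations this is roughly the following (rpc_cmd in the trace is the harness's wrapper around the same RPCs; the delay flags are read as microseconds, rpc.py's usual unit):

# TCP transport, passing the harness's option string through unchanged.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
# Backing store: 64 MiB malloc bdev named Malloc0, 4096-byte blocks.
./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
# Delay0 wraps Malloc0 with ~1 s average and p99 read/write latency,
# so a 128-deep queue stays abortable instead of completing instantly.
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000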
00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.943 [2024-11-06 15:17:24.708802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.943 15:17:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:06.943 [2024-11-06 15:17:24.848531] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:09.490 Initializing NVMe Controllers 00:05:09.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:09.490 controller IO queue size 128 less than required 00:05:09.490 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:09.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:09.490 Initialization complete. Launching workers. 
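The subsystem wiring and the workload follow directly: Delay0 becomes namespace 1 of cnode0, the subsystem and the discovery service listen on the namespaced address, and the abort example hammers the queue for one second. Restated from the traced rpc_cmd calls and the example invocation, paths again relative to the spdk tree; the NS/CTRLR totals just below are that run's report (28369 aborts submitted, 28312 successful, none failed at the controller):

# Subsystem cnode0: any host may connect (-a), serial number SPDK0.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# One core (-c 0x1), one second (-t 1), queue depth 128 against the delay bdev;
# log level warning (-l warning) keeps the discovery-referral notice visible.
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128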
00:05:09.490 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28308 00:05:09.490 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28369, failed to submit 62 00:05:09.490 success 28312, unsuccessful 57, failed 0 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:09.490 rmmod nvme_tcp 00:05:09.490 rmmod nvme_fabrics 00:05:09.490 rmmod nvme_keyring 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3553761 ']' 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3553761 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3553761 ']' 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3553761 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:09.490 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3553761 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3553761' 00:05:09.491 killing process with pid 3553761 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3553761 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3553761 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:09.491 15:17:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:09.491 15:17:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:12.036 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:12.036 00:05:12.036 real 0m13.640s 00:05:12.036 user 0m14.404s 00:05:12.036 sys 0m6.725s 00:05:12.036 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.036 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.036 ************************************ 00:05:12.036 END TEST nvmf_abort 00:05:12.036 ************************************ 00:05:12.036 15:17:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:12.036 15:17:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:12.036 15:17:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.036 15:17:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:12.036 ************************************ 00:05:12.036 START TEST nvmf_ns_hotplug_stress 00:05:12.036 ************************************ 00:05:12.036 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:12.036 * Looking for test storage... 
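nvmftestfini above unwinds the abort test in reverse before ns_hotplug_stress begins: sync, unload the kernel NVMe/TCP stack, kill the target, strip only the firewall rules carrying the SPDK_NVMF comment tag, and drop the namespace. Roughly as follows, with the namespace deletion written as an assumption since _remove_spdk_ns's body is elided from this trace:

sync
modprobe -v -r nvme-tcp        # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above are its verbose output
modprobe -v -r nvme-fabrics
kill 3553761 && wait 3553761   # killprocess 3553761, as traced
# iptr: restore the firewall minus every rule tagged with the SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns amounts to here
ip -4 addr flush cvl_0_1          # matches the final flush logged above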
00:05:12.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:12.036 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:12.037 15:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:20.183 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:20.183 
15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:20.183 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:20.183 Found net devices under 0000:31:00.0: cvl_0_0 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:20.183 Found net devices under 0000:31:00.1: cvl_0_1 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:20.183 15:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:20.183 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:20.183 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:20.183 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:20.183 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:20.183 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:20.183 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:20.183 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:20.183 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:20.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:20.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:05:20.183 00:05:20.183 --- 10.0.0.2 ping statistics --- 00:05:20.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:20.183 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:20.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:20.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:05:20.184 00:05:20.184 --- 10.0.0.1 ping statistics --- 00:05:20.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:20.184 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3558676 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3558676 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
3558676 ']' 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:20.184 [2024-11-06 15:17:37.362702] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:05:20.184 [2024-11-06 15:17:37.362782] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:20.184 [2024-11-06 15:17:37.436561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.184 [2024-11-06 15:17:37.483307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:20.184 [2024-11-06 15:17:37.483355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:20.184 [2024-11-06 15:17:37.483362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:20.184 [2024-11-06 15:17:37.483367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:20.184 [2024-11-06 15:17:37.483372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
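The lcov version probe at the top of this section (lcov --version piped through awk '{print $NF}', then lt 1.15 2 via cmp_versions) decides whether the installed lcov predates 2.x; older versions need the explicit --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options the trace exports next. A compact sketch of the same dotted-version comparison, assuming purely numeric components (an illustration, not the exact scripts/common.sh implementation):

    # Return 0 (true) when dotted version $1 is strictly older than $2.
    ver_lt() {
        local IFS=.-:                    # split on the separators the trace uses
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing fields default to 0, so "1.15" vs "2" pads to "1.15" vs "2.0".
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                         # equal is not "less than"
    }

    ver_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'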
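The shell error quoted above (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected) is benign. The xtrace line right before it shows the failing test, '[' '' -eq 1 ']': a variable consumed by the numeric -eq comparison expanded to the empty string, so [ rejects it and the script falls through to the false branch it wanted anyway. A defensive sketch of the pattern (VAR is a placeholder, not the variable common.sh actually tests):

    # [ "" -eq 1 ] raises "integer expression expected"; defaulting the
    # expansion keeps the numeric test well-formed even when VAR is unset.
    if [ "${VAR:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi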
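The hardware bring-up traced above reduces to three steps: discover the two e810 ports through sysfs, isolate one of them in a private network namespace so target and initiator can talk over real wire on a single host, and launch nvmf_tgt inside that namespace. A condensed replay under the same assumptions as the log (ports 0000:31:00.0/.1 named cvl_0_0/cvl_0_1; paths shortened; address flushes and error handling omitted; the readiness poll is a waitforlisten-style stand-in, not the exact autotest_common.sh helper):

    # 1. NIC discovery: a bound port exposes its netdev under its PCI sysfs node
    for pci in 0000:31:00.0 0000:31:00.1; do
        echo "Found net devices under $pci:" /sys/bus/pci/devices/$pci/net/*
    done

    # 2. Namespace test bed: target port in its own netns, initiator in root ns
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target side moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator keeps the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # root ns  -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> root ns

    # 3. Start the target inside the namespace and wait for its RPC socket
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    NVMF_PID=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$NVMF_PID" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done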
00:05:20.184 [2024-11-06 15:17:37.485081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.184 [2024-11-06 15:17:37.485239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.184 [2024-11-06 15:17:37.485241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:20.184 [2024-11-06 15:17:37.802661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.184 15:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:20.184 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:20.446 [2024-11-06 15:17:38.208865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:20.446 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:20.707 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:20.707 Malloc0 00:05:20.707 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:20.968 Delay0 00:05:20.968 15:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.229 15:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:21.490 NULL1 00:05:21.491 15:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:21.491 15:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:21.491 15:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3559215 00:05:21.491 15:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:21.491 15:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.752 15:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.012 15:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:22.012 15:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:22.273 true 00:05:22.273 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:22.273 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.273 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.533 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:22.533 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:22.794 true 00:05:22.794 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:22.794 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.054 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.054 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:23.054 15:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:23.315 true 00:05:23.315 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:23.315 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.575 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.576 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:23.576 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:23.837 true 00:05:23.837 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:23.837 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.098 15:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.098 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:24.098 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:24.358 true 00:05:24.358 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:24.358 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.619 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.619 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:24.619 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:24.878 true 00:05:24.878 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:24.878 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.139 15:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.399 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:25.399 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:25.399 true 00:05:25.400 15:17:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:25.400 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.660 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.921 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:25.921 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:25.921 true 00:05:25.921 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:25.921 15:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.182 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.443 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:26.443 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:26.443 true 00:05:26.443 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:26.443 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.703 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.964 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:26.964 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:26.964 true 00:05:26.964 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:26.964 15:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.225 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.485 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:27.485 15:17:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:27.485 true 00:05:27.485 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:27.485 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.746 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.007 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:28.007 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:28.007 true 00:05:28.007 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:28.007 15:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.267 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.528 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:28.529 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:28.529 true 00:05:28.529 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:28.529 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.789 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.050 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:29.050 15:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:29.050 true 00:05:29.050 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:29.050 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.310 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.570 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:29.570 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:29.570 true 00:05:29.831 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:29.831 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.831 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.092 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:30.092 15:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:30.352 true 00:05:30.352 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:30.352 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.352 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.612 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:30.612 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:30.873 true 00:05:30.873 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:30.873 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.135 15:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.135 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:31.135 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:31.394 true 00:05:31.395 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:31.395 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.655 15:17:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.655 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:31.655 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:31.916 true 00:05:31.916 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:31.916 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.213 15:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.213 15:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:32.213 15:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:32.474 true 00:05:32.474 15:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:32.474 15:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.734 15:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.734 15:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:32.734 15:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:32.995 true 00:05:32.995 15:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:32.995 15:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.255 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.255 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:33.255 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:33.516 true 00:05:33.516 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215 00:05:33.516 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:33.776 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:34.037 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:05:34.037 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:05:34.037 true
00:05:34.037 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:34.037 15:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:34.297 15:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:34.559 15:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:05:34.559 15:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:05:34.559 true
00:05:34.559 15:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:34.559 15:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:34.820 15:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:35.080 15:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:05:35.080 15:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:05:35.080 true
00:05:35.339 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:35.339 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:35.339 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:35.600 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:05:35.600 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:05:35.860 true
00:05:35.860 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:35.860 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:35.860 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:36.121 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:36.121 15:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:05:36.382 true
00:05:36.382 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:36.382 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:36.382 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:36.642 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:36.642 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:36.940 true
00:05:36.940 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:36.940 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:37.235 15:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:37.235 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:37.235 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:37.573 true
00:05:37.573 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:37.573 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:37.573 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:37.838 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:05:37.838 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:05:37.838 true
00:05:38.099 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:38.099 15:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:38.099 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:38.360 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:05:38.360 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:05:38.620 true
00:05:38.620 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:38.620 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:38.620 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:38.882 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:05:38.882 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:05:39.143 true
00:05:39.143 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:39.143 15:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:39.143 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:39.404 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:05:39.404 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:05:39.665 true
00:05:39.665 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:39.665 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:39.925 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:39.925 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:05:39.925 15:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:05:40.186 true
00:05:40.186 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:40.186 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:40.447 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:40.447 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:05:40.447 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:05:40.708 true
00:05:40.708 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:40.708 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:40.969 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:41.230 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:05:41.230 15:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:05:41.230 true
00:05:41.230 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:41.230 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:41.491 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:41.752 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:05:41.752 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:05:41.752 true
00:05:41.752 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:41.752 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:42.012 15:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:42.273 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:05:42.273 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:05:42.273 true
00:05:42.273 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:42.273 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:42.533 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:42.794 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:05:42.794 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:05:43.054 true
00:05:43.054 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:43.054 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:43.054 15:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:43.315 15:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:05:43.315 15:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:05:43.575 true
00:05:43.575 15:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:43.575 15:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:43.836 15:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:43.836 15:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:05:43.836 15:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:05:44.097 true
00:05:44.097 15:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:44.097 15:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:44.357 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:44.357 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042
00:05:44.357 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042
00:05:44.618 true
00:05:44.618 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:44.618 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:44.879 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:44.879 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043
00:05:44.879 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043
00:05:45.139 true
00:05:45.139 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:45.139 15:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:45.400 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:45.400 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044
00:05:45.400 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044
00:05:45.660 true
00:05:45.660 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:45.660 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:45.921 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:46.183 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045
00:05:46.183 15:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045
00:05:46.183 true
00:05:46.444 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:46.444 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:46.444 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:46.705 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:05:46.705 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:05:46.965 true
00:05:46.965 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:46.965 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:46.965 15:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:47.227 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:05:47.227 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:05:47.488 true
00:05:47.488 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:47.488 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:47.749 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:47.749 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:05:47.749 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:05:48.010 true
00:05:48.010 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:48.010 15:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:48.272 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:48.272 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049
00:05:48.272 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049
00:05:48.532 true
00:05:48.532 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:48.532 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:48.793 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:49.053 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050
00:05:49.053 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050
00:05:49.053 true
00:05:49.054 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:49.054 15:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:49.313 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:49.574 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051
00:05:49.574 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051
00:05:49.574 true
00:05:49.574 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:49.574 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:49.835 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:50.096 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:05:50.096 15:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:05:50.356 true
00:05:50.356 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:50.356 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:50.356 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:50.617 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:05:50.617 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:05:50.877 true
00:05:50.877 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:50.877 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:51.138 15:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:51.138 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:05:51.138 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:05:51.399 true
00:05:51.399 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:51.399 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:51.660 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:51.660 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:05:51.660 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:05:51.921 true
00:05:51.921 Initializing NVMe Controllers
00:05:51.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:51.921 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:05:51.921 Controller IO queue size 128, less than required.
00:05:51.921 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:51.921 WARNING: Some requested NVMe devices were skipped
00:05:51.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:51.921 Initialization complete. Launching workers.
00:05:51.921 ========================================================
00:05:51.921 Latency(us)
00:05:51.921 Device Information : IOPS MiB/s Average min max
00:05:51.921 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30585.20 14.93 4184.98 1150.25 10794.60
00:05:51.921 ========================================================
00:05:51.921 Total : 30585.20 14.93 4184.98 1150.25 10794.60
00:05:51.921
00:05:51.921 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3559215
00:05:51.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3559215) - No such process
00:05:51.921 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3559215
00:05:51.921 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:52.182 15:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:52.443 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:52.443 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:52.443 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:52.443 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:52.443 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:52.443 null0
00:05:52.443 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:52.443 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:52.443 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:52.704 null1
00:05:52.704 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:52.704 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:52.704 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:52.964 null2
00:05:52.964 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:52.964 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:52.964 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:05:52.964 null3
00:05:52.964 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:52.964 15:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:53.224 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:05:53.224 null4
00:05:53.224 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:53.224 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:53.224 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:05:53.483 null5
00:05:53.484 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:53.484 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:53.484 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:05:53.484 null6
00:05:53.484 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:53.484 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:53.484 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:05:53.745 null7
00:05:53.745 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:53.745 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:53.745 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:05:53.745 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
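The trace above, at ns_hotplug_stress.sh lines 44-50, is the first stress phase: while the I/O workload (PID 3559215) is still alive, namespace 1 of nqn.2016-06.io.spdk:cnode1 is repeatedly detached and re-attached as the Delay0 bdev, and the NULL1 null bdev is grown by one unit per pass (null_size 1023 through 1055 in this excerpt; each bdev_null_resize prints "true"). A minimal sketch of that loop, reconstructed from the trace; PERF_PID is an assumed variable name for the workload's PID, the starting null_size is not shown in this excerpt, and rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path logged above:

    # Hot-remove/re-add the namespace and grow the null bdev while I/O is in flight.
    null_size=1022                               # earlier passes precede this excerpt
    while kill -0 "$PERF_PID"; do                # loop exits when the I/O job does; the
                                                 # "line 44: kill: (3559215) - No such process"
                                                 # message above is exactly that exit
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))             # 1023, 1024, ... as logged
        rpc.py bdev_null_resize NULL1 $null_size # prints "true" on success
    done

The bdevperf-style summary above (30585.20 IOPS on NSID 2) shows I/O kept completing throughout the hotplug churn, which is the pass condition for this phase.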
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3566328 3566329 3566331 3566333 3566335 3566337 3566340 3566342
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:53.746 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:54.007 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:54.007 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:54.007 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:54.007 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:54.007 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:54.007 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:54.007 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:54.007 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:54.007 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:54.007 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
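The launch sequence traced above (script lines 58-66) creates eight null bdevs, null0 through null7, each 100 MB with a 4096-byte block size, then forks eight background jobs, one per namespace, and waits on them. Each job runs the add_remove function traced at lines 14-18, attaching and detaching its own NSID ten times. A sketch reconstructed from the trace, under the same assumptions as the previous block (rpc.py abbreviates the full scripts/rpc.py path; the loop shapes are inferred from the @58-@66 and @14-@18 trace lines):

    # add_remove, as traced at ns_hotplug_stress.sh@14-18: ten add/remove cycles per NSID.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096   # name, size in MB, block size
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &            # NSIDs 1-8, one per null bdev
        pids+=($!)
    done
    wait "${pids[@]}"                               # the eight PIDs 3566328-3566342 above

Because all eight jobs hit the same subsystem concurrently, the nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns traces that follow interleave in nondeterministic order; that concurrent namespace churn is the hotplug race this phase is exercising.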
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.268 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.268 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.268 15:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:54.268 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.268 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.268 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.268 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.268 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.268 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:54.269 15:18:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.269 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.531 15:18:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:54.531 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:54.793 15:18:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.793 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:55.054 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.054 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.054 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:55.054 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.054 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.054 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:55.054 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.054 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.054 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:55.054 15:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:55.054 15:18:12 
[... the ns_hotplug_stress.sh@16-@18 cycle above repeats from 00:05:55.054 / 15:18:12 through 00:05:57.667 / 15:18:15: nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns against nqn.2016-06.io.spdk:cnode1 (NSIDs 1-8 backed by null0-null7) until the loop counter reaches 10 and the remaining namespaces are removed ...]
00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
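At this point the loop's error trap is dropped and nvmftestfini starts unwinding the target. Pieced together from the sync above and the @123-@129 / @952-@976 entries that follow, the two helpers behave roughly like this (a sketch inferred from the xtrace, not the verbatim SPDK functions; TEST_TRANSPORT stands in for whatever variable expands to tcp here):

  nvmfcleanup() {
      sync                                      # nvmf/common.sh@121
      if [ "$TEST_TRANSPORT" == tcp ]; then     # @123: kernel NVMe modules only matter for TCP runs
          set +e                                # @124: rmmod can fail while references linger
          for i in {1..20}; do                  # @125: retry the unload up to 20 times
              modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # @126-@127
          done
          set -e                                # @128
      fi
  }

  killprocess() {                               # autotest_common.sh@952-@976
      local pid=$1
      [ -z "$pid" ] && return 1                 # @952: no pid, nothing to kill
      kill -0 "$pid"                            # @956: is the nvmf target app still alive?
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")   # @957-@958
      # @962: the real helper special-cases process_name = sudo (elided here)
      echo "killing process with pid $pid"      # @970
      kill "$pid" && wait "$pid"                # @971/@976: reap it so ports and hugepages free up
  }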
00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:57.667 rmmod nvme_tcp 00:05:57.667 rmmod nvme_fabrics 00:05:57.667 rmmod nvme_keyring 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3558676 ']' 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3558676 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3558676 ']' 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3558676 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3558676 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3558676' 00:05:57.667 killing process with pid 3558676 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3558676 00:05:57.667 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3558676 00:05:57.927 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:57.927 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:57.927 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:57.927 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:57.927 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:57.927 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:57.927 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:57.927 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:57.927 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:57.927 15:18:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:57.927 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:57.927 15:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:59.840 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:05:59.840
00:05:59.840 real 0m48.288s
00:05:59.840 user 3m16.881s
00:05:59.840 sys 0m17.684s
00:05:59.840 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:59.840 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:59.840 ************************************
00:05:59.840 END TEST nvmf_ns_hotplug_stress
************************************
00:06:00.101 15:18:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:00.101 15:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:06:00.101 15:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:00.101 15:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:00.101 ************************************
00:06:00.101 START TEST nvmf_delete_subsystem
************************************
00:06:00.101 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:00.101 * Looking for test storage...
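The real/user/sys block and the starred banners above come from run_test in autotest_common.sh, which brackets every suite and times it under bash's time keyword. A hedged reconstruction from the @1103-@1128 entries (xtrace_restore is assumed as the counterpart of the traced xtrace_disable):

  run_test() {
      [ $# -le 1 ] && return 1        # @1103: need a test name plus a command ('[' 3 -le 1 ']' here)
      local test_name=$1; shift
      xtrace_disable                  # @1109: keep the banner itself out of the trace
      echo '************************************'
      echo "START TEST $test_name"
      echo '************************************'
      xtrace_restore
      time "$@"                       # @1127: run the suite; emits the real/user/sys block
      xtrace_disable                  # @1128
      echo '************************************'
      echo "END TEST $test_name"
      echo '************************************'
      xtrace_restore
  }

So nvmf_ns_hotplug_stress closes out at roughly 48 seconds wall-clock, and run_test immediately launches delete_subsystem.sh with --transport=tcp.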
00:06:00.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:00.101 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:00.101 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:00.101 15:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:00.101 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:00.101 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.101 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.101 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.101 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.101 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.101 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.101 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.101 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.102 --rc genhtml_branch_coverage=1 00:06:00.102 --rc genhtml_function_coverage=1 00:06:00.102 --rc genhtml_legend=1 00:06:00.102 --rc geninfo_all_blocks=1 00:06:00.102 --rc geninfo_unexecuted_blocks=1 00:06:00.102 00:06:00.102 ' 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.102 --rc genhtml_branch_coverage=1 00:06:00.102 --rc genhtml_function_coverage=1 00:06:00.102 --rc genhtml_legend=1 00:06:00.102 --rc geninfo_all_blocks=1 00:06:00.102 --rc geninfo_unexecuted_blocks=1 00:06:00.102 00:06:00.102 ' 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.102 --rc genhtml_branch_coverage=1 00:06:00.102 --rc genhtml_function_coverage=1 00:06:00.102 --rc genhtml_legend=1 00:06:00.102 --rc geninfo_all_blocks=1 00:06:00.102 --rc geninfo_unexecuted_blocks=1 00:06:00.102 00:06:00.102 ' 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.102 --rc genhtml_branch_coverage=1 00:06:00.102 --rc genhtml_function_coverage=1 00:06:00.102 --rc genhtml_legend=1 00:06:00.102 --rc geninfo_all_blocks=1 00:06:00.102 --rc geninfo_unexecuted_blocks=1 00:06:00.102 00:06:00.102 ' 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.102 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:00.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:00.364 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:00.365 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:00.365 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:00.365 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:00.365 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:00.365 15:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:08.510 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:08.510 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:08.511 
15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:08.511 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:08.511 Found net devices under 0000:31:00.0: cvl_0_0 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:08.511 Found net devices under 0000:31:00.1: cvl_0_1 
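The trace above is nvmf/common.sh discovering the test NICs: it matches the two E810 ports by PCI vendor/device ID (0x8086:0x159b) and then resolves each port's kernel interface through sysfs. A minimal standalone sketch of that discovery pattern, assuming a Linux sysfs layout; the IDs are the ones visible in this trace, and only the E810 pair is checked (the real script also knows the x722 and Mellanox IDs listed earlier):

#!/usr/bin/env bash
# Find NICs by PCI vendor:device and list their kernel net interfaces,
# as the gather_supported_nvmf_pci_devs trace above does.
# Assumption: E810 only (0x8086:0x159b).
intel=0x8086 e810_dev=0x159b
pci_devs=()
for dev in /sys/bus/pci/devices/*; do
    [[ $(<"$dev/vendor") == "$intel" && $(<"$dev/device") == "$e810_dev" ]] &&
        pci_devs+=("${dev##*/}")
done
net_devs=()
for pci in "${pci_devs[@]}"; do
    [[ -d /sys/bus/pci/devices/$pci/net ]] || continue   # port not bound to a netdev driver
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the sysfs path, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

On this node that yields cvl_0_0 and cvl_0_1, which the harness goes on to split between a network namespace for the target side and the host side for the initiator, as the nvmf_tcp_init trace below shows.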
00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:08.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:08.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:06:08.511 00:06:08.511 --- 10.0.0.2 ping statistics --- 00:06:08.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.511 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:08.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:08.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:06:08.511 00:06:08.511 --- 10.0.0.1 ping statistics --- 00:06:08.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.511 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3571553 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3571553 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3571553 ']' 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.511 15:18:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.511 15:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.511 [2024-11-06 15:18:25.797331] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:06:08.512 [2024-11-06 15:18:25.797401] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:08.512 [2024-11-06 15:18:25.899229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.512 [2024-11-06 15:18:25.950179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:08.512 [2024-11-06 15:18:25.950235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:08.512 [2024-11-06 15:18:25.950243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.512 [2024-11-06 15:18:25.950251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.512 [2024-11-06 15:18:25.950257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:08.512 [2024-11-06 15:18:25.952119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.512 [2024-11-06 15:18:25.952122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.773 [2024-11-06 15:18:26.676339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:08.773 15:18:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.773 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.774 [2024-11-06 15:18:26.700670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.774 NULL1 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.774 Delay0 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3571841 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:08.774 15:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:09.035 [2024-11-06 15:18:26.827623] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
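At this point the test has assembled the whole target over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a 1000 MiB null bdev wrapped in bdev_delay so I/O lingers long enough to race, a listener on 10.0.0.2:4420, and a perf job (pid 3571841) driving it. Condensed into rpc.py form as a sketch — the harness actually goes through its rpc_cmd wrapper, and a running nvmf_tgt on the default /var/tmp/spdk.sock socket is assumed:

# The rpc_cmd calls traced above, condensed; run from an SPDK checkout.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s of injected latency per I/O
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# With spdk_nvme_perf still pushing I/O at the namespace, this is the step
# that produces the wall of 'completed with error (sct=0, sc=8)' lines below:
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1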
00:06:10.952 15:18:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:10.952 15:18:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.952 15:18:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 starting I/O failed: -6 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Write completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 starting I/O failed: -6 00:06:11.212 Write completed with error (sct=0, sc=8) 00:06:11.212 Write completed with error (sct=0, sc=8) 00:06:11.212 Write completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 starting I/O failed: -6 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Write completed with error (sct=0, sc=8) 00:06:11.212 Write completed with error (sct=0, sc=8) 00:06:11.212 starting I/O failed: -6 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Write completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 starting I/O failed: -6 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 starting I/O failed: -6 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Write completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 starting I/O failed: -6 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.212 Write completed with error (sct=0, sc=8) 00:06:11.212 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 
00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read 
completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 starting I/O failed: -6 00:06:11.213 starting I/O failed: -6 00:06:11.213 starting I/O failed: -6 00:06:11.213 starting I/O failed: -6 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 Write completed with error (sct=0, sc=8) 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.213 starting I/O failed: -6 00:06:11.213 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 starting I/O failed: -6 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 starting I/O failed: -6 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 starting I/O failed: -6 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 starting I/O failed: -6 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 starting I/O failed: -6 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 starting I/O failed: -6 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 starting I/O failed: -6 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 starting I/O failed: -6 00:06:11.214 [2024-11-06 15:18:28.998347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f91b8000c40 is same with the state(6) to be set 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, 
sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Read completed with error (sct=0, sc=8) 00:06:11.214 Write completed with error (sct=0, sc=8) 00:06:12.159 [2024-11-06 15:18:29.969493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160b5e0 is same with the state(6) to be set 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read 
completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 [2024-11-06 15:18:29.998198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160a4a0 is same with the state(6) to be set 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Write completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.159 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 [2024-11-06 15:18:29.998473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160a0e0 is same with the state(6) to be set 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error 
(sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 [2024-11-06 15:18:29.999408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f91b800d020 is same with the state(6) to be set 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Read completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 Write completed with error (sct=0, sc=8) 00:06:12.160 [2024-11-06 15:18:29.999470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f91b800d7e0 is same with the state(6) to be set 00:06:12.160 Initializing NVMe Controllers 00:06:12.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:12.160 Controller IO queue size 128, less than required. 00:06:12.160 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:12.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:12.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:12.160 Initialization complete. Launching workers. 
00:06:12.160 ========================================================
00:06:12.160 Latency(us)
00:06:12.160 Device Information : IOPS MiB/s Average min max
00:06:12.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 188.21 0.09 900114.05 419.33 1007458.66
00:06:12.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.86 0.07 956142.32 304.81 2000882.00
00:06:12.160 ========================================================
00:06:12.160 Total : 341.07 0.17 925224.53 304.81 2000882.00
00:06:12.160
00:06:12.160 [2024-11-06 15:18:30.000038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160b5e0 (9): Bad file descriptor
00:06:12.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:12.160 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:12.160 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:12.160 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3571841
00:06:12.160 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3571841
00:06:12.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3571841) - No such process
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3571841
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3571841
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3571841
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:12.731 15:18:30
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.731 [2024-11-06 15:18:30.535879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3572580 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3572580 00:06:12.731 15:18:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:12.731 [2024-11-06 15:18:30.640812] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
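The second half of the test repeats the race with a 3-second perf run (pid 3572580), then enters the bounded poll traced below: probe the pid every half second until kill -0 fails or the loop counter runs out. The same idiom as a standalone sketch; the names mirror the trace, and the background job is a hypothetical stand-in for the spdk_nvme_perf invocation:

# Bounded liveness poll, as in delete_subsystem.sh lines 56-60 above.
long_running_io_job &                  # stand-in for the traced spdk_nvme_perf job
perf_pid=$!
delay=0
until (( delay++ > 20 )); do           # at most ~10 s: 21 probes, 0.5 s apart
    # kill -0 delivers no signal; it only asks "does this pid still exist?"
    kill -0 "$perf_pid" 2>/dev/null || break
    sleep 0.5
done
# The trace then runs 'NOT wait <pid>': reaping the dead perf job is expected
# to return non-zero, because its subsystem was deleted out from under it.
wait "$perf_pid" || echo "perf exited with an error, as the test expects"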
00:06:13.303 15:18:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:13.303 15:18:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3572580 00:06:13.303 15:18:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:13.875 15:18:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:13.875 15:18:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3572580 00:06:13.875 15:18:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:14.136 15:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:14.136 15:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3572580 00:06:14.136 15:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:14.719 15:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:14.719 15:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3572580 00:06:14.719 15:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:15.292 15:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:15.292 15:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3572580 00:06:15.292 15:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:15.866 15:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:15.866 15:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3572580 00:06:15.866 15:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:15.866 Initializing NVMe Controllers 00:06:15.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:15.866 Controller IO queue size 128, less than required. 00:06:15.866 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:15.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:15.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:15.866 Initialization complete. Launching workers. 
00:06:15.866 ========================================================
00:06:15.866 Latency(us)
00:06:15.866 Device Information : IOPS MiB/s Average min max
00:06:15.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002617.86 1000125.34 1043529.85
00:06:15.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003585.54 1000256.79 1041633.90
00:06:15.866 ========================================================
00:06:15.866 Total : 256.00 0.12 1003101.70 1000125.34 1043529.85
00:06:15.866
00:06:16.128 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:16.128 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3572580
00:06:16.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3572580) - No such process
00:06:16.128 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3572580
00:06:16.128 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:16.128 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:16.128 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:16.128 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:16.128 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:16.128 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:16.128 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:16.128 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:16.128 rmmod nvme_tcp
00:06:16.389 rmmod nvme_fabrics
00:06:16.389 rmmod nvme_keyring
00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3571553 ']'
00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3571553
00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3571553 ']'
00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3571553
00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3571553
00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '['
reactor_0 = sudo ']' 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3571553' 00:06:16.389 killing process with pid 3571553 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3571553 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3571553 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.389 15:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:18.937 00:06:18.937 real 0m18.539s 00:06:18.937 user 0m31.032s 00:06:18.937 sys 0m6.898s 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:18.937 ************************************ 00:06:18.937 END TEST nvmf_delete_subsystem 00:06:18.937 ************************************ 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:18.937 ************************************ 00:06:18.937 START TEST nvmf_host_management 00:06:18.937 ************************************ 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:18.937 * Looking for test storage... 
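The nvmftestfini sequence traced just above unwinds the delete_subsystem setup in reverse: kernel NVMe modules out, the SPDK_NVMF-tagged firewall rule dropped, the target process killed after a process-name check, and the namespace and addresses flushed. A rough standalone equivalent, assuming the names this log uses; the trace does not show the body of _remove_spdk_ns, so the netns delete below is an assumption:

# Teardown mirroring the traced nvmftestfini/nvmf_tcp_fini path.
nvmfpid=3571553                                     # the nvmf_tgt pid recorded at startup
sudo modprobe -v -r nvme-tcp nvme-fabrics           # the rmmod output above shows nvme_keyring going too
# killprocess: only signal the pid if it still looks like an SPDK reactor
[ "$(ps --no-headers -o comm= "$nvmfpid")" = reactor_0 ] && sudo kill "$nvmfpid"
# iptr: rewrite the ruleset without any rule carrying the SPDK_NVMF comment tag
sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore
sudo ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed equivalent of _remove_spdk_ns
sudo ip -4 addr flush cvl_0_1                       # last step before the next test starts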
00:06:18.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.937 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:18.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.938 --rc genhtml_branch_coverage=1 00:06:18.938 --rc genhtml_function_coverage=1 00:06:18.938 --rc genhtml_legend=1 00:06:18.938 --rc geninfo_all_blocks=1 00:06:18.938 --rc geninfo_unexecuted_blocks=1 00:06:18.938 00:06:18.938 ' 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:18.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.938 --rc genhtml_branch_coverage=1 00:06:18.938 --rc genhtml_function_coverage=1 00:06:18.938 --rc genhtml_legend=1 00:06:18.938 --rc geninfo_all_blocks=1 00:06:18.938 --rc geninfo_unexecuted_blocks=1 00:06:18.938 00:06:18.938 ' 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:18.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.938 --rc genhtml_branch_coverage=1 00:06:18.938 --rc genhtml_function_coverage=1 00:06:18.938 --rc genhtml_legend=1 00:06:18.938 --rc geninfo_all_blocks=1 00:06:18.938 --rc geninfo_unexecuted_blocks=1 00:06:18.938 00:06:18.938 ' 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:18.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.938 --rc genhtml_branch_coverage=1 00:06:18.938 --rc genhtml_function_coverage=1 00:06:18.938 --rc genhtml_legend=1 00:06:18.938 --rc geninfo_all_blocks=1 00:06:18.938 --rc geninfo_unexecuted_blocks=1 00:06:18.938 00:06:18.938 ' 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:18.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.938 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:18.939 15:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:27.233 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:27.233 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:27.233 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:27.234 Found net devices under 0000:31:00.0: cvl_0_0 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.234 15:18:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:27.234 Found net devices under 0000:31:00.1: cvl_0_1 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:27.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:27.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:06:27.234 00:06:27.234 --- 10.0.0.2 ping statistics --- 00:06:27.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.234 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:27.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:27.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:06:27.234 00:06:27.234 --- 10.0.0.1 ping statistics --- 00:06:27.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.234 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3577638 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3577638 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:27.234 15:18:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3577638 ']' 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.234 15:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.234 [2024-11-06 15:18:44.457550] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:06:27.234 [2024-11-06 15:18:44.457616] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.234 [2024-11-06 15:18:44.556710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.234 [2024-11-06 15:18:44.610396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.234 [2024-11-06 15:18:44.610447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.234 [2024-11-06 15:18:44.610455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.234 [2024-11-06 15:18:44.610462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.234 [2024-11-06 15:18:44.610469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
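
The nvmfappstart step above boils down to: launch nvmf_tgt inside the freshly created test namespace, then poll its RPC socket until the app answers. A minimal sketch of that pattern, assuming an SPDK checkout as the working directory and rpc.py's default /var/tmp/spdk.sock socket (neither is spelled out verbatim at this point in the log):

# Start the NVMe-oF target in the test namespace; -m 0x1E pins reactors to
# cores 1-4 (matching the four reactor notices that follow), -e 0xFFFF
# enables all tracepoint groups, -i 0 selects shared-memory ID 0.
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# waitforlisten equivalent: retry a harmless RPC until the socket is up.
sudo ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null

The same socket is then used for the nvmf_create_transport and subsystem RPCs issued below.
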
00:06:27.234 [2024-11-06 15:18:44.612516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.234 [2024-11-06 15:18:44.612675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.234 [2024-11-06 15:18:44.612835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:27.234 [2024-11-06 15:18:44.612835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.496 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.496 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:27.496 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:27.496 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.496 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.497 [2024-11-06 15:18:45.338368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.497 Malloc0 00:06:27.497 [2024-11-06 15:18:45.427489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.497 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3577813 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3577813 /var/tmp/bdevperf.sock 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3577813 ']' 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:27.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:27.758 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:27.758 { 00:06:27.758 "params": { 00:06:27.758 "name": "Nvme$subsystem", 00:06:27.758 "trtype": "$TEST_TRANSPORT", 00:06:27.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:27.758 "adrfam": "ipv4", 00:06:27.758 "trsvcid": "$NVMF_PORT", 00:06:27.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:27.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:27.758 "hdgst": ${hdgst:-false}, 00:06:27.759 "ddgst": ${ddgst:-false} 00:06:27.759 }, 00:06:27.759 "method": "bdev_nvme_attach_controller" 00:06:27.759 } 00:06:27.759 EOF 00:06:27.759 )") 00:06:27.759 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:27.759 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:27.759 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:27.759 15:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:27.759 "params": { 00:06:27.759 "name": "Nvme0", 00:06:27.759 "trtype": "tcp", 00:06:27.759 "traddr": "10.0.0.2", 00:06:27.759 "adrfam": "ipv4", 00:06:27.759 "trsvcid": "4420", 00:06:27.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:27.759 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:27.759 "hdgst": false, 00:06:27.759 "ddgst": false 00:06:27.759 }, 00:06:27.759 "method": "bdev_nvme_attach_controller" 00:06:27.759 }' 00:06:27.759 [2024-11-06 15:18:45.539593] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
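
gen_nvmf_target_json above only prints the "params" object; it is wrapped into a full JSON config and handed to bdevperf on /dev/fd/63. A sketch of an equivalent standalone invocation, with the surrounding "subsystems"/"bdev" wrapper reconstructed from SPDK's JSON-config conventions (the wrapper itself is not shown verbatim in this log; process substitution reproduces the /dev/fd/63 plumbing):

# Same bdevperf flags as logged: 64-deep queue, 64 KiB I/Os, verify
# workload, 10 s runtime, private RPC socket for the iostat polling below.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
  --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
)

The attached controller "Nvme0" exposes namespace 1 as bdev Nvme0n1, which is the bdev the verify job and the iostat queries operate on.
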
00:06:27.759 [2024-11-06 15:18:45.539666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577813 ] 00:06:27.759 [2024-11-06 15:18:45.635751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.759 [2024-11-06 15:18:45.689818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.332 Running I/O for 10 seconds... 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:28.596 15:18:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.596 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:28.596 [2024-11-06 15:18:46.447549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e94910 is same with the state(6) to be set 00:06:28.596 [2024-11-06 15:18:46.447811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.447873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.447905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.447914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.447924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.447932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.447942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.447950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.447959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.447967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.447976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.447984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.447994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.448001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.448011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.448018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 
15:18:46.448028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.448036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.448045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.448053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.448062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.448070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.448079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.448087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.448097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.448104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.448114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.448124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.596 [2024-11-06 15:18:46.448133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.596 [2024-11-06 15:18:46.448141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448202] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.597 [2024-11-06 15:18:46.448558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.597 [2024-11-06 15:18:46.448567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repetitive span condensed: the same NOTICE pair repeats for each of the 26 in-flight READs, cid:22 through cid:47, lba:76544 through lba:79744 in 128-block steps, every command completing with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 15:18:46.448567 through 15:18:46.449010]
00:06:28.598 [2024-11-06 15:18:46.449153 - 15:18:46.449216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each of the four completing with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:28.598 [2024-11-06 15:18:46.449224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x1e5c280 is same with the state(6) to be set 00:06:28.598 [2024-11-06 15:18:46.450448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:28.598 task offset: 79872 on job bdev=Nvme0n1 fails 00:06:28.598 00:06:28.598 Latency(us) 00:06:28.598 [2024-11-06T14:18:46.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:28.598 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:28.598 Job: Nvme0n1 ended in about 0.42 seconds with error 00:06:28.598 Verification LBA range: start 0x0 length 0x400 00:06:28.598 Nvme0n1 : 0.42 1362.87 85.18 151.43 0.00 41013.27 1966.08 37137.07 00:06:28.598 [2024-11-06T14:18:46.581Z] =================================================================================================================== 00:06:28.598 [2024-11-06T14:18:46.581Z] Total : 1362.87 85.18 151.43 0.00 41013.27 1966.08 37137.07 00:06:28.598 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.598 [2024-11-06 15:18:46.452653] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.598 [2024-11-06 15:18:46.452691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5c280 (9): Bad file descriptor 00:06:28.598 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:28.598 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.598 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:28.598 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.598 15:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:28.860 [2024-11-06 15:18:46.595991] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
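The trace at this point shows the host-management flow: the aborted qpair is torn down, the controller reconnects, and rpc_cmd whitelists host0 on cnode0 so the next bdevperf run can attach. A minimal stand-alone sketch of that whitelisting step, assuming a target already serving nqn.2016-06.io.spdk:cnode0 on the default /var/tmp/spdk.sock RPC socket:

# Allow host0 to connect to cnode0 (the same RPC the harness wraps in rpc_cmd).
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Confirm the host NQN now appears under "hosts" for the subsystem.
scripts/rpc.py nvmf_get_subsystems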
00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3577813 00:06:29.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3577813) - No such process 00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:29.803 { 00:06:29.803 "params": { 00:06:29.803 "name": "Nvme$subsystem", 00:06:29.803 "trtype": "$TEST_TRANSPORT", 00:06:29.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:29.803 "adrfam": "ipv4", 00:06:29.803 "trsvcid": "$NVMF_PORT", 00:06:29.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:29.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:29.803 "hdgst": ${hdgst:-false}, 00:06:29.803 "ddgst": ${ddgst:-false} 00:06:29.803 }, 00:06:29.803 "method": "bdev_nvme_attach_controller" 00:06:29.803 } 00:06:29.803 EOF 00:06:29.803 )") 00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:29.803 15:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:29.803 "params": { 00:06:29.803 "name": "Nvme0", 00:06:29.803 "trtype": "tcp", 00:06:29.803 "traddr": "10.0.0.2", 00:06:29.803 "adrfam": "ipv4", 00:06:29.803 "trsvcid": "4420", 00:06:29.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:29.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:29.803 "hdgst": false, 00:06:29.803 "ddgst": false 00:06:29.803 }, 00:06:29.803 "method": "bdev_nvme_attach_controller" 00:06:29.803 }' 00:06:29.803 [2024-11-06 15:18:47.526662] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:06:29.803 [2024-11-06 15:18:47.526716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3578288 ] 00:06:29.803 [2024-11-06 15:18:47.616379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.803 [2024-11-06 15:18:47.651086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.064 Running I/O for 1 seconds... 
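The heredoc that gen_nvmf_target_json expands above becomes the bdevperf --json payload, delivered through the /dev/fd/62 process substitution. A hedged sketch of replaying the same workload by hand while the run proceeds below, assuming the standard SPDK "subsystems"/"config" wrapper that gen_nvmf_target_json places around the printed fragment:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same knobs as the traced run: queue depth 64, 64 KiB I/O, verify workload, 1 second.
build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1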
00:06:31.006 1611.00 IOPS, 100.69 MiB/s 00:06:31.006 Latency(us) 00:06:31.006 [2024-11-06T14:18:48.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:31.006 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:31.006 Verification LBA range: start 0x0 length 0x400 00:06:31.006 Nvme0n1 : 1.01 1660.16 103.76 0.00 0.00 37742.88 1631.57 32768.00 00:06:31.006 [2024-11-06T14:18:48.989Z] =================================================================================================================== 00:06:31.006 [2024-11-06T14:18:48.989Z] Total : 1660.16 103.76 0.00 0.00 37742.88 1631.57 32768.00 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:31.267 rmmod nvme_tcp 00:06:31.267 rmmod nvme_fabrics 00:06:31.267 rmmod nvme_keyring 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3577638 ']' 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3577638 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3577638 ']' 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3577638 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3577638 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:31.267 15:18:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3577638' 00:06:31.267 killing process with pid 3577638 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3577638 00:06:31.267 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3577638 00:06:31.529 [2024-11-06 15:18:49.295301] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:31.529 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:31.529 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:31.529 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:31.529 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:31.529 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:31.529 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:31.529 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:31.529 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:31.529 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:31.529 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.529 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.529 15:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.444 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:33.444 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:33.444 00:06:33.444 real 0m14.912s 00:06:33.444 user 0m23.764s 00:06:33.444 sys 0m6.871s 00:06:33.444 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:33.444 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.444 ************************************ 00:06:33.444 END TEST nvmf_host_management 00:06:33.444 ************************************ 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:33.706 ************************************ 00:06:33.706 START TEST nvmf_lvol 00:06:33.706 ************************************ 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:33.706 * Looking for test storage... 00:06:33.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.706 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:33.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.969 --rc genhtml_branch_coverage=1 00:06:33.969 --rc genhtml_function_coverage=1 00:06:33.969 --rc genhtml_legend=1 00:06:33.969 --rc geninfo_all_blocks=1 00:06:33.969 --rc geninfo_unexecuted_blocks=1 00:06:33.969 00:06:33.969 ' 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:33.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.969 --rc genhtml_branch_coverage=1 00:06:33.969 --rc genhtml_function_coverage=1 00:06:33.969 --rc genhtml_legend=1 00:06:33.969 --rc geninfo_all_blocks=1 00:06:33.969 --rc geninfo_unexecuted_blocks=1 00:06:33.969 00:06:33.969 ' 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:33.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.969 --rc genhtml_branch_coverage=1 00:06:33.969 --rc genhtml_function_coverage=1 00:06:33.969 --rc genhtml_legend=1 00:06:33.969 --rc geninfo_all_blocks=1 00:06:33.969 --rc geninfo_unexecuted_blocks=1 00:06:33.969 00:06:33.969 ' 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:33.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.969 --rc genhtml_branch_coverage=1 00:06:33.969 --rc genhtml_function_coverage=1 00:06:33.969 --rc genhtml_legend=1 00:06:33.969 --rc geninfo_all_blocks=1 00:06:33.969 --rc geninfo_unexecuted_blocks=1 00:06:33.969 00:06:33.969 ' 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
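The cmp_versions trace above ("lt 1.15 2") is scripts/common.sh deciding whether the installed lcov predates 2.x: it splits each version string on dots and compares field by field. A condensed stand-alone equivalent of that check, using sort -V instead of the harness's field loop:

lt() {
    # True when $1 sorts strictly before $2 in version order.
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]
}
lt 1.15 2 && echo "lcov 1.15 predates 2.x, keep the 1.x lcov option set"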
00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.969 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:33.970 15:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:42.117 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.117 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.117 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.117 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.117 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.117 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:42.118 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:42.118 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.118 15:18:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:42.118 Found net devices under 0000:31:00.0: cvl_0_0 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:42.118 Found net devices under 0000:31:00.1: cvl_0_1 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.118 15:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:42.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:06:42.118 00:06:42.118 --- 10.0.0.2 ping statistics --- 00:06:42.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.118 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:42.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:06:42.118 00:06:42.118 --- 10.0.0.1 ping statistics --- 00:06:42.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.118 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:42.118 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3582874 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3582874 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3582874 ']' 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.119 15:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:42.119 [2024-11-06 15:18:59.402363] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
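The launch above runs nvmf_tgt inside the cvl_0_0_ns_spdk namespace built earlier; the three "Reactor started" notices below follow from the -m 0x7 core mask (cores 0 through 2), and -e 0xFFFF enables every tracepoint group, as the app_setup_trace notices confirm. A reduced sketch of the same launch, with a polling loop standing in for the harness's waitforlisten helper:

sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
# Block until the RPC socket answers before issuing any rpc.py calls.
until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done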
00:06:42.119 [2024-11-06 15:18:59.402430] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.119 [2024-11-06 15:18:59.503231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.119 [2024-11-06 15:18:59.557006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.119 [2024-11-06 15:18:59.557056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.119 [2024-11-06 15:18:59.557064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.119 [2024-11-06 15:18:59.557071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.119 [2024-11-06 15:18:59.557077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.119 [2024-11-06 15:18:59.558882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.119 [2024-11-06 15:18:59.559085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.119 [2024-11-06 15:18:59.559086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.380 15:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.380 15:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:42.380 15:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:42.380 15:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.380 15:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:42.380 15:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.380 15:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:42.641 [2024-11-06 15:19:00.444779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.641 15:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:42.902 15:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:42.902 15:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:43.163 15:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:43.164 15:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:43.164 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:43.424 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=279c59ad-6a35-4ce6-97aa-8365f0c76762 00:06:43.424 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 279c59ad-6a35-4ce6-97aa-8365f0c76762 lvol 20 00:06:43.685 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ce01292e-fb1e-44dd-9cc3-4c4e3104d7be 00:06:43.685 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:43.945 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ce01292e-fb1e-44dd-9cc3-4c4e3104d7be 00:06:43.945 15:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:44.205 [2024-11-06 15:19:02.031609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:44.205 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:44.465 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3583472 00:06:44.465 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:44.465 15:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:45.408 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ce01292e-fb1e-44dd-9cc3-4c4e3104d7be MY_SNAPSHOT 00:06:45.669 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8fabbdf9-dddb-4cd3-8adc-c53aa74fc7fd 00:06:45.669 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ce01292e-fb1e-44dd-9cc3-4c4e3104d7be 30 00:06:45.929 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8fabbdf9-dddb-4cd3-8adc-c53aa74fc7fd MY_CLONE 00:06:45.929 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=eb2e234c-9fa2-4903-8132-1007224aa046 00:06:45.929 15:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate eb2e234c-9fa2-4903-8132-1007224aa046 00:06:46.500 15:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3583472 00:06:54.640 Initializing NVMe Controllers 00:06:54.640 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:54.640 Controller IO queue size 128, less than required. 00:06:54.640 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
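The randwrite run now starting sits on the lvol stack the trace just assembled: two 64 MiB malloc bdevs striped into raid0, an lvstore on the raid, a 20 MiB lvol exported as cnode0's namespace, then MY_SNAPSHOT, a live resize to 30 MiB, MY_CLONE, and an inflate. Consolidated as stand-alone RPCs, reusing the UUIDs this particular run produced (a fresh run would print its own):

rpc=scripts/rpc.py
$rpc bdev_malloc_create 64 512                                   # creates Malloc0
$rpc bdev_malloc_create 64 512                                   # creates Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"   # 64 KiB strips
$rpc bdev_lvol_create_lvstore raid0 lvs                          # printed 279c59ad-6a35-4ce6-97aa-8365f0c76762
$rpc bdev_lvol_create -u 279c59ad-6a35-4ce6-97aa-8365f0c76762 lvol 20
$rpc bdev_lvol_snapshot ce01292e-fb1e-44dd-9cc3-4c4e3104d7be MY_SNAPSHOT
$rpc bdev_lvol_resize ce01292e-fb1e-44dd-9cc3-4c4e3104d7be 30    # grow the live lvol
$rpc bdev_lvol_clone 8fabbdf9-dddb-4cd3-8adc-c53aa74fc7fd MY_CLONE
$rpc bdev_lvol_inflate eb2e234c-9fa2-4903-8132-1007224aa046      # detach clone from snapshot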
00:06:54.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:54.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:54.640 Initialization complete. Launching workers. 00:06:54.640 ======================================================== 00:06:54.640 Latency(us) 00:06:54.640 Device Information : IOPS MiB/s Average min max 00:06:54.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15974.30 62.40 8016.14 1512.97 59911.97 00:06:54.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17324.80 67.67 7389.25 1005.54 50920.17 00:06:54.640 ======================================================== 00:06:54.640 Total : 33299.10 130.07 7689.98 1005.54 59911.97 00:06:54.640 00:06:54.640 15:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:54.901 15:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ce01292e-fb1e-44dd-9cc3-4c4e3104d7be 00:06:54.901 15:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 279c59ad-6a35-4ce6-97aa-8365f0c76762 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:55.162 rmmod nvme_tcp 00:06:55.162 rmmod nvme_fabrics 00:06:55.162 rmmod nvme_keyring 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3582874 ']' 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3582874 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3582874 ']' 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3582874 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:55.162 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3582874 00:06:55.422 15:19:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:55.422 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3582874' 00:06:55.423 killing process with pid 3582874 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3582874 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3582874 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.423 15:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.966 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:57.966 00:06:57.966 real 0m23.904s 00:06:57.966 user 1m4.286s 00:06:57.966 sys 0m8.705s 00:06:57.966 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.966 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:57.966 ************************************ 00:06:57.966 END TEST nvmf_lvol 00:06:57.966 ************************************ 00:06:57.966 15:19:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:57.966 15:19:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:57.966 15:19:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.966 15:19:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:57.966 ************************************ 00:06:57.966 START TEST nvmf_lvs_grow 00:06:57.966 ************************************ 00:06:57.966 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:57.967 * Looking for test storage... 
00:06:57.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.967 --rc genhtml_branch_coverage=1 00:06:57.967 --rc genhtml_function_coverage=1 00:06:57.967 --rc genhtml_legend=1 00:06:57.967 --rc geninfo_all_blocks=1 00:06:57.967 --rc geninfo_unexecuted_blocks=1 00:06:57.967 00:06:57.967 ' 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.967 --rc genhtml_branch_coverage=1 00:06:57.967 --rc genhtml_function_coverage=1 00:06:57.967 --rc genhtml_legend=1 00:06:57.967 --rc geninfo_all_blocks=1 00:06:57.967 --rc geninfo_unexecuted_blocks=1 00:06:57.967 00:06:57.967 ' 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.967 --rc genhtml_branch_coverage=1 00:06:57.967 --rc genhtml_function_coverage=1 00:06:57.967 --rc genhtml_legend=1 00:06:57.967 --rc geninfo_all_blocks=1 00:06:57.967 --rc geninfo_unexecuted_blocks=1 00:06:57.967 00:06:57.967 ' 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.967 --rc genhtml_branch_coverage=1 00:06:57.967 --rc genhtml_function_coverage=1 00:06:57.967 --rc genhtml_legend=1 00:06:57.967 --rc geninfo_all_blocks=1 00:06:57.967 --rc geninfo_unexecuted_blocks=1 00:06:57.967 00:06:57.967 ' 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:57.967 15:19:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:57.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:57.967 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:57.968 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:57.968 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.968 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:57.968 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:57.968 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:57.968 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.968 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.968 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.968 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:57.968 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:57.968 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:57.968 15:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:06.116 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:06.116 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.116 15:19:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:06.116 Found net devices under 0000:31:00.0: cvl_0_0 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:06.116 Found net devices under 0000:31:00.1: cvl_0_1 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.116 15:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.116 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.116 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.116 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:06.116 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.116 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.116 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.116 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:06.116 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:06.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:07:06.116 00:07:06.116 --- 10.0.0.2 ping statistics --- 00:07:06.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.116 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:06.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:07:06.117 00:07:06.117 --- 10.0.0.1 ping statistics --- 00:07:06.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.117 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3590011 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3590011 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3590011 ']' 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:06.117 15:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.117 [2024-11-06 15:19:23.424475] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
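Everything from nvmftestinit down to the two pings above is test-bed plumbing: the target-side e810 port (cvl_0_0) is moved into a private network namespace so NVMe/TCP traffic genuinely crosses the physical link instead of short-circuiting over loopback, while the initiator port (cvl_0_1) stays in the root namespace. A condensed sketch of what the trace just did, reusing this rig's interface names and 10.0.0.0/24 addressing (both specific to this bed); the polling loop at the end is a rough stand-in for waitforlisten, not the harness's literal code:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open TCP/4420; the SPDK_NVMF comment is what lets teardown strip the rule later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                     # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns
  # Launch the target inside the namespace, then poll until its RPC socket answers.
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  until "$spdk/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

The Starting SPDK banner above and the DPDK EAL parameters record just below are that nvmf_tgt instance coming up on core 0; once it answers, the suite registers the TCP transport with nvmf_create_transport -t tcp -o -u 8192, as seen shortly after.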
00:07:06.117 [2024-11-06 15:19:23.424537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.117 [2024-11-06 15:19:23.523109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.117 [2024-11-06 15:19:23.574249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.117 [2024-11-06 15:19:23.574298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.117 [2024-11-06 15:19:23.574307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.117 [2024-11-06 15:19:23.574314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.117 [2024-11-06 15:19:23.574320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.117 [2024-11-06 15:19:23.575129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.378 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:06.378 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:06.378 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:06.378 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.378 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.378 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.378 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:06.640 [2024-11-06 15:19:24.452388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.640 ************************************ 00:07:06.640 START TEST lvs_grow_clean 00:07:06.640 ************************************ 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:06.640 15:19:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.640 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:06.902 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:06.902 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:07.163 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=037a3b85-6d7d-430e-8e0e-de23788ad285 00:07:07.163 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 037a3b85-6d7d-430e-8e0e-de23788ad285 00:07:07.163 15:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:07.163 15:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:07.163 15:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:07.163 15:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 037a3b85-6d7d-430e-8e0e-de23788ad285 lvol 150 00:07:07.424 15:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=61e0de58-162f-4cf2-b0a9-48e1879e71ed 00:07:07.424 15:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:07.424 15:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:07.685 [2024-11-06 15:19:25.482582] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:07.686 [2024-11-06 15:19:25.482654] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:07.686 true 00:07:07.686 15:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
037a3b85-6d7d-430e-8e0e-de23788ad285 00:07:07.686 15:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:07.947 15:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:07.947 15:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:07.947 15:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 61e0de58-162f-4cf2-b0a9-48e1879e71ed 00:07:08.208 15:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:08.208 [2024-11-06 15:19:26.184776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.469 15:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:08.469 15:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:08.469 15:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3590589 00:07:08.469 15:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:08.469 15:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3590589 /var/tmp/bdevperf.sock 00:07:08.469 15:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3590589 ']' 00:07:08.469 15:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:08.469 15:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:08.469 15:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:08.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:08.469 15:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:08.470 15:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:08.470 [2024-11-06 15:19:26.402154] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
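At this point lvs_grow_clean has built its whole stack on a sparse file: a 200 MiB file was turned into a 4 KiB-block AIO bdev, an lvstore with 4 MiB clusters was created on it (49 data clusters), a 150 MiB lvol was carved out, the file was then grown to 400 MiB and the AIO bdev rescanned (51200 -> 102400 blocks), and the lvol was exported as namespace 1 of cnode0. The same provisioning condensed into a sketch; the backing-file path is illustrative (the test keeps it in its own tree), everything else mirrors the trace, including the fact that the two create RPCs print the new UUID/bdev name on stdout:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  aio_file=/tmp/aio_bdev_file                            # illustrative path
  truncate -s 200M "$aio_file"
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)       # 150 MiB logical volume
  truncate -s 400M "$aio_file"                           # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev                          # ...and let SPDK see the new size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Growing the file before the workload starts is the point of the test: bdev_lvol_grow_lvstore, issued a few seconds into the bdevperf run below, only has to claim clusters the rescan already made visible, and the pass criterion is total_data_clusters moving from 49 to 99 while randwrite I/O stays in flight.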
00:07:08.470 [2024-11-06 15:19:26.402215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590589 ] 00:07:08.731 [2024-11-06 15:19:26.496159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.731 [2024-11-06 15:19:26.549427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.302 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:09.302 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:09.302 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:09.874 Nvme0n1 00:07:09.875 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:09.875 [ 00:07:09.875 { 00:07:09.875 "name": "Nvme0n1", 00:07:09.875 "aliases": [ 00:07:09.875 "61e0de58-162f-4cf2-b0a9-48e1879e71ed" 00:07:09.875 ], 00:07:09.875 "product_name": "NVMe disk", 00:07:09.875 "block_size": 4096, 00:07:09.875 "num_blocks": 38912, 00:07:09.875 "uuid": "61e0de58-162f-4cf2-b0a9-48e1879e71ed", 00:07:09.875 "numa_id": 0, 00:07:09.875 "assigned_rate_limits": { 00:07:09.875 "rw_ios_per_sec": 0, 00:07:09.875 "rw_mbytes_per_sec": 0, 00:07:09.875 "r_mbytes_per_sec": 0, 00:07:09.875 "w_mbytes_per_sec": 0 00:07:09.875 }, 00:07:09.875 "claimed": false, 00:07:09.875 "zoned": false, 00:07:09.875 "supported_io_types": { 00:07:09.875 "read": true, 00:07:09.875 "write": true, 00:07:09.875 "unmap": true, 00:07:09.875 "flush": true, 00:07:09.875 "reset": true, 00:07:09.875 "nvme_admin": true, 00:07:09.875 "nvme_io": true, 00:07:09.875 "nvme_io_md": false, 00:07:09.875 "write_zeroes": true, 00:07:09.875 "zcopy": false, 00:07:09.875 "get_zone_info": false, 00:07:09.875 "zone_management": false, 00:07:09.875 "zone_append": false, 00:07:09.875 "compare": true, 00:07:09.875 "compare_and_write": true, 00:07:09.875 "abort": true, 00:07:09.875 "seek_hole": false, 00:07:09.875 "seek_data": false, 00:07:09.875 "copy": true, 00:07:09.875 "nvme_iov_md": false 00:07:09.875 }, 00:07:09.875 "memory_domains": [ 00:07:09.875 { 00:07:09.875 "dma_device_id": "system", 00:07:09.875 "dma_device_type": 1 00:07:09.875 } 00:07:09.875 ], 00:07:09.875 "driver_specific": { 00:07:09.875 "nvme": [ 00:07:09.875 { 00:07:09.875 "trid": { 00:07:09.875 "trtype": "TCP", 00:07:09.875 "adrfam": "IPv4", 00:07:09.875 "traddr": "10.0.0.2", 00:07:09.875 "trsvcid": "4420", 00:07:09.875 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:09.875 }, 00:07:09.875 "ctrlr_data": { 00:07:09.875 "cntlid": 1, 00:07:09.875 "vendor_id": "0x8086", 00:07:09.875 "model_number": "SPDK bdev Controller", 00:07:09.875 "serial_number": "SPDK0", 00:07:09.875 "firmware_revision": "25.01", 00:07:09.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.875 "oacs": { 00:07:09.875 "security": 0, 00:07:09.875 "format": 0, 00:07:09.875 "firmware": 0, 00:07:09.875 "ns_manage": 0 00:07:09.875 }, 00:07:09.875 "multi_ctrlr": true, 00:07:09.875 
"ana_reporting": false 00:07:09.875 }, 00:07:09.875 "vs": { 00:07:09.875 "nvme_version": "1.3" 00:07:09.875 }, 00:07:09.875 "ns_data": { 00:07:09.875 "id": 1, 00:07:09.875 "can_share": true 00:07:09.875 } 00:07:09.875 } 00:07:09.875 ], 00:07:09.875 "mp_policy": "active_passive" 00:07:09.875 } 00:07:09.875 } 00:07:09.875 ] 00:07:09.875 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3590925 00:07:09.875 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:09.875 15:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:10.135 Running I/O for 10 seconds... 00:07:11.078 Latency(us) 00:07:11.078 [2024-11-06T14:19:29.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.078 Nvme0n1 : 1.00 24965.00 97.52 0.00 0.00 0.00 0.00 0.00 00:07:11.078 [2024-11-06T14:19:29.061Z] =================================================================================================================== 00:07:11.078 [2024-11-06T14:19:29.061Z] Total : 24965.00 97.52 0.00 0.00 0.00 0.00 0.00 00:07:11.078 00:07:12.020 15:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 037a3b85-6d7d-430e-8e0e-de23788ad285 00:07:12.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.020 Nvme0n1 : 2.00 25161.50 98.29 0.00 0.00 0.00 0.00 0.00 00:07:12.020 [2024-11-06T14:19:30.003Z] =================================================================================================================== 00:07:12.020 [2024-11-06T14:19:30.003Z] Total : 25161.50 98.29 0.00 0.00 0.00 0.00 0.00 00:07:12.020 00:07:12.282 true 00:07:12.282 15:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 037a3b85-6d7d-430e-8e0e-de23788ad285 00:07:12.282 15:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:12.282 15:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:12.282 15:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:12.282 15:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3590925 00:07:13.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.225 Nvme0n1 : 3.00 25133.33 98.18 0.00 0.00 0.00 0.00 0.00 00:07:13.225 [2024-11-06T14:19:31.208Z] =================================================================================================================== 00:07:13.225 [2024-11-06T14:19:31.208Z] Total : 25133.33 98.18 0.00 0.00 0.00 0.00 0.00 00:07:13.225 00:07:14.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.166 Nvme0n1 : 4.00 25221.25 98.52 0.00 0.00 0.00 0.00 0.00 00:07:14.166 [2024-11-06T14:19:32.150Z] 
=================================================================================================================== 00:07:14.167 [2024-11-06T14:19:32.150Z] Total : 25221.25 98.52 0.00 0.00 0.00 0.00 0.00 00:07:14.167 00:07:15.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.108 Nvme0n1 : 5.00 25270.80 98.71 0.00 0.00 0.00 0.00 0.00 00:07:15.108 [2024-11-06T14:19:33.091Z] =================================================================================================================== 00:07:15.108 [2024-11-06T14:19:33.091Z] Total : 25270.80 98.71 0.00 0.00 0.00 0.00 0.00 00:07:15.108 00:07:16.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.050 Nvme0n1 : 6.00 25303.33 98.84 0.00 0.00 0.00 0.00 0.00 00:07:16.050 [2024-11-06T14:19:34.033Z] =================================================================================================================== 00:07:16.050 [2024-11-06T14:19:34.033Z] Total : 25303.33 98.84 0.00 0.00 0.00 0.00 0.00 00:07:16.050 00:07:16.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.993 Nvme0n1 : 7.00 25328.57 98.94 0.00 0.00 0.00 0.00 0.00 00:07:16.993 [2024-11-06T14:19:34.976Z] =================================================================================================================== 00:07:16.993 [2024-11-06T14:19:34.976Z] Total : 25328.57 98.94 0.00 0.00 0.00 0.00 0.00 00:07:16.993 00:07:18.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.377 Nvme0n1 : 8.00 25351.75 99.03 0.00 0.00 0.00 0.00 0.00 00:07:18.377 [2024-11-06T14:19:36.360Z] =================================================================================================================== 00:07:18.377 [2024-11-06T14:19:36.360Z] Total : 25351.75 99.03 0.00 0.00 0.00 0.00 0.00 00:07:18.377 00:07:19.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.319 Nvme0n1 : 9.00 25365.00 99.08 0.00 0.00 0.00 0.00 0.00 00:07:19.319 [2024-11-06T14:19:37.302Z] =================================================================================================================== 00:07:19.319 [2024-11-06T14:19:37.302Z] Total : 25365.00 99.08 0.00 0.00 0.00 0.00 0.00 00:07:19.319 00:07:20.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.260 Nvme0n1 : 10.00 25382.00 99.15 0.00 0.00 0.00 0.00 0.00 00:07:20.260 [2024-11-06T14:19:38.243Z] =================================================================================================================== 00:07:20.260 [2024-11-06T14:19:38.243Z] Total : 25382.00 99.15 0.00 0.00 0.00 0.00 0.00 00:07:20.260 00:07:20.260 00:07:20.260 Latency(us) 00:07:20.260 [2024-11-06T14:19:38.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.260 Nvme0n1 : 10.00 25383.71 99.16 0.00 0.00 5039.20 2252.80 12724.91 00:07:20.260 [2024-11-06T14:19:38.243Z] =================================================================================================================== 00:07:20.260 [2024-11-06T14:19:38.243Z] Total : 25383.71 99.16 0.00 0.00 5039.20 2252.80 12724.91 00:07:20.260 { 00:07:20.260 "results": [ 00:07:20.260 { 00:07:20.260 "job": "Nvme0n1", 00:07:20.260 "core_mask": "0x2", 00:07:20.260 "workload": "randwrite", 00:07:20.260 "status": "finished", 00:07:20.260 "queue_depth": 128, 00:07:20.260 "io_size": 4096, 00:07:20.260 
"runtime": 10.004367, 00:07:20.260 "iops": 25383.71493168933, 00:07:20.260 "mibps": 99.15513645191145, 00:07:20.260 "io_failed": 0, 00:07:20.260 "io_timeout": 0, 00:07:20.260 "avg_latency_us": 5039.199184294948, 00:07:20.260 "min_latency_us": 2252.8, 00:07:20.260 "max_latency_us": 12724.906666666666 00:07:20.260 } 00:07:20.260 ], 00:07:20.260 "core_count": 1 00:07:20.260 } 00:07:20.260 15:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3590589 00:07:20.260 15:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3590589 ']' 00:07:20.260 15:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3590589 00:07:20.260 15:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:20.260 15:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:20.260 15:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3590589 00:07:20.260 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:20.260 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:20.260 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3590589' 00:07:20.260 killing process with pid 3590589 00:07:20.260 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3590589 00:07:20.260 Received shutdown signal, test time was about 10.000000 seconds 00:07:20.260 00:07:20.260 Latency(us) 00:07:20.260 [2024-11-06T14:19:38.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.260 [2024-11-06T14:19:38.243Z] =================================================================================================================== 00:07:20.260 [2024-11-06T14:19:38.243Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:20.260 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3590589 00:07:20.260 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.520 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:20.780 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 037a3b85-6d7d-430e-8e0e-de23788ad285 00:07:20.780 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:20.780 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:20.780 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:20.780 15:19:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:21.041 [2024-11-06 15:19:38.836089] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:21.041 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 037a3b85-6d7d-430e-8e0e-de23788ad285 00:07:21.041 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:21.041 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 037a3b85-6d7d-430e-8e0e-de23788ad285 00:07:21.041 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.041 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.041 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.041 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.042 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.042 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.042 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.042 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:21.042 15:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 037a3b85-6d7d-430e-8e0e-de23788ad285 00:07:21.302 request: 00:07:21.302 { 00:07:21.302 "uuid": "037a3b85-6d7d-430e-8e0e-de23788ad285", 00:07:21.302 "method": "bdev_lvol_get_lvstores", 00:07:21.302 "req_id": 1 00:07:21.302 } 00:07:21.302 Got JSON-RPC error response 00:07:21.302 response: 00:07:21.302 { 00:07:21.302 "code": -19, 00:07:21.302 "message": "No such device" 00:07:21.302 } 00:07:21.302 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:21.302 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.302 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:21.302 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.302 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.302 aio_bdev 00:07:21.302 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 61e0de58-162f-4cf2-b0a9-48e1879e71ed 00:07:21.302 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=61e0de58-162f-4cf2-b0a9-48e1879e71ed 00:07:21.302 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:21.302 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:21.302 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:21.302 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:21.302 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:21.563 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 61e0de58-162f-4cf2-b0a9-48e1879e71ed -t 2000 00:07:21.830 [ 00:07:21.830 { 00:07:21.830 "name": "61e0de58-162f-4cf2-b0a9-48e1879e71ed", 00:07:21.830 "aliases": [ 00:07:21.830 "lvs/lvol" 00:07:21.830 ], 00:07:21.830 "product_name": "Logical Volume", 00:07:21.830 "block_size": 4096, 00:07:21.830 "num_blocks": 38912, 00:07:21.830 "uuid": "61e0de58-162f-4cf2-b0a9-48e1879e71ed", 00:07:21.830 "assigned_rate_limits": { 00:07:21.830 "rw_ios_per_sec": 0, 00:07:21.830 "rw_mbytes_per_sec": 0, 00:07:21.830 "r_mbytes_per_sec": 0, 00:07:21.830 "w_mbytes_per_sec": 0 00:07:21.830 }, 00:07:21.830 "claimed": false, 00:07:21.830 "zoned": false, 00:07:21.830 "supported_io_types": { 00:07:21.830 "read": true, 00:07:21.830 "write": true, 00:07:21.830 "unmap": true, 00:07:21.830 "flush": false, 00:07:21.830 "reset": true, 00:07:21.830 "nvme_admin": false, 00:07:21.830 "nvme_io": false, 00:07:21.830 "nvme_io_md": false, 00:07:21.830 "write_zeroes": true, 00:07:21.830 "zcopy": false, 00:07:21.830 "get_zone_info": false, 00:07:21.830 "zone_management": false, 00:07:21.830 "zone_append": false, 00:07:21.830 "compare": false, 00:07:21.830 "compare_and_write": false, 00:07:21.830 "abort": false, 00:07:21.830 "seek_hole": true, 00:07:21.830 "seek_data": true, 00:07:21.830 "copy": false, 00:07:21.830 "nvme_iov_md": false 00:07:21.830 }, 00:07:21.830 "driver_specific": { 00:07:21.830 "lvol": { 00:07:21.830 "lvol_store_uuid": "037a3b85-6d7d-430e-8e0e-de23788ad285", 00:07:21.830 "base_bdev": "aio_bdev", 00:07:21.830 "thin_provision": false, 00:07:21.830 "num_allocated_clusters": 38, 00:07:21.830 "snapshot": false, 00:07:21.830 "clone": false, 00:07:21.830 "esnap_clone": false 00:07:21.830 } 00:07:21.830 } 00:07:21.830 } 00:07:21.830 ] 00:07:21.830 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:21.830 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 037a3b85-6d7d-430e-8e0e-de23788ad285 00:07:21.830 
15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:21.831 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:21.831 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:21.831 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 037a3b85-6d7d-430e-8e0e-de23788ad285 00:07:22.091 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:22.091 15:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 61e0de58-162f-4cf2-b0a9-48e1879e71ed 00:07:22.352 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 037a3b85-6d7d-430e-8e0e-de23788ad285 00:07:22.352 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.613 00:07:22.613 real 0m15.965s 00:07:22.613 user 0m15.622s 00:07:22.613 sys 0m1.446s 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:22.613 ************************************ 00:07:22.613 END TEST lvs_grow_clean 00:07:22.613 ************************************ 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:22.613 ************************************ 00:07:22.613 START TEST lvs_grow_dirty 00:07:22.613 ************************************ 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.613 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.940 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:22.940 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:23.246 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:23.246 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:23.246 15:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:23.246 15:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:23.246 15:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:23.246 15:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a64efda9-0771-44bc-b7cd-440cb2defc33 lvol 150 00:07:23.506 15:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e54610be-00bf-45e9-9706-40b778155f10 00:07:23.506 15:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.506 15:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:23.506 [2024-11-06 15:19:41.481109] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:23.506 [2024-11-06 15:19:41.481148] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:23.506 true 00:07:23.767 15:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:23.767 15:19:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:23.767 15:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:23.767 15:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:24.028 15:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e54610be-00bf-45e9-9706-40b778155f10 00:07:24.028 15:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:24.288 [2024-11-06 15:19:42.138994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.288 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.549 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:24.549 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3593886 00:07:24.549 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:24.549 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3593886 /var/tmp/bdevperf.sock 00:07:24.549 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3593886 ']' 00:07:24.549 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:24.549 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:24.549 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:24.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:24.549 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:24.549 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:24.549 [2024-11-06 15:19:42.337561] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:07:24.549 [2024-11-06 15:19:42.337612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3593886 ] 00:07:24.549 [2024-11-06 15:19:42.422039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.549 [2024-11-06 15:19:42.451967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.810 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.810 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:24.810 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:25.070 Nvme0n1 00:07:25.070 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:25.070 [ 00:07:25.070 { 00:07:25.070 "name": "Nvme0n1", 00:07:25.070 "aliases": [ 00:07:25.070 "e54610be-00bf-45e9-9706-40b778155f10" 00:07:25.070 ], 00:07:25.070 "product_name": "NVMe disk", 00:07:25.070 "block_size": 4096, 00:07:25.070 "num_blocks": 38912, 00:07:25.070 "uuid": "e54610be-00bf-45e9-9706-40b778155f10", 00:07:25.070 "numa_id": 0, 00:07:25.070 "assigned_rate_limits": { 00:07:25.070 "rw_ios_per_sec": 0, 00:07:25.070 "rw_mbytes_per_sec": 0, 00:07:25.070 "r_mbytes_per_sec": 0, 00:07:25.070 "w_mbytes_per_sec": 0 00:07:25.070 }, 00:07:25.070 "claimed": false, 00:07:25.070 "zoned": false, 00:07:25.070 "supported_io_types": { 00:07:25.070 "read": true, 00:07:25.070 "write": true, 00:07:25.070 "unmap": true, 00:07:25.070 "flush": true, 00:07:25.070 "reset": true, 00:07:25.070 "nvme_admin": true, 00:07:25.070 "nvme_io": true, 00:07:25.070 "nvme_io_md": false, 00:07:25.070 "write_zeroes": true, 00:07:25.070 "zcopy": false, 00:07:25.070 "get_zone_info": false, 00:07:25.070 "zone_management": false, 00:07:25.071 "zone_append": false, 00:07:25.071 "compare": true, 00:07:25.071 "compare_and_write": true, 00:07:25.071 "abort": true, 00:07:25.071 "seek_hole": false, 00:07:25.071 "seek_data": false, 00:07:25.071 "copy": true, 00:07:25.071 "nvme_iov_md": false 00:07:25.071 }, 00:07:25.071 "memory_domains": [ 00:07:25.071 { 00:07:25.071 "dma_device_id": "system", 00:07:25.071 "dma_device_type": 1 00:07:25.071 } 00:07:25.071 ], 00:07:25.071 "driver_specific": { 00:07:25.071 "nvme": [ 00:07:25.071 { 00:07:25.071 "trid": { 00:07:25.071 "trtype": "TCP", 00:07:25.071 "adrfam": "IPv4", 00:07:25.071 "traddr": "10.0.0.2", 00:07:25.071 "trsvcid": "4420", 00:07:25.071 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:25.071 }, 00:07:25.071 "ctrlr_data": { 00:07:25.071 "cntlid": 1, 00:07:25.071 "vendor_id": "0x8086", 00:07:25.071 "model_number": "SPDK bdev Controller", 00:07:25.071 "serial_number": "SPDK0", 00:07:25.071 "firmware_revision": "25.01", 00:07:25.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:25.071 "oacs": { 00:07:25.071 "security": 0, 00:07:25.071 "format": 0, 00:07:25.071 "firmware": 0, 00:07:25.071 "ns_manage": 0 00:07:25.071 }, 00:07:25.071 "multi_ctrlr": true, 00:07:25.071 
"ana_reporting": false 00:07:25.071 }, 00:07:25.071 "vs": { 00:07:25.071 "nvme_version": "1.3" 00:07:25.071 }, 00:07:25.071 "ns_data": { 00:07:25.071 "id": 1, 00:07:25.071 "can_share": true 00:07:25.071 } 00:07:25.071 } 00:07:25.071 ], 00:07:25.071 "mp_policy": "active_passive" 00:07:25.071 } 00:07:25.071 } 00:07:25.071 ] 00:07:25.071 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3594022 00:07:25.071 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:25.071 15:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:25.331 Running I/O for 10 seconds... 00:07:26.274 Latency(us) 00:07:26.274 [2024-11-06T14:19:44.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.274 Nvme0n1 : 1.00 25047.00 97.84 0.00 0.00 0.00 0.00 0.00 00:07:26.274 [2024-11-06T14:19:44.257Z] =================================================================================================================== 00:07:26.274 [2024-11-06T14:19:44.257Z] Total : 25047.00 97.84 0.00 0.00 0.00 0.00 0.00 00:07:26.274 00:07:27.215 15:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:27.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.215 Nvme0n1 : 2.00 25195.00 98.42 0.00 0.00 0.00 0.00 0.00 00:07:27.215 [2024-11-06T14:19:45.198Z] =================================================================================================================== 00:07:27.215 [2024-11-06T14:19:45.198Z] Total : 25195.00 98.42 0.00 0.00 0.00 0.00 0.00 00:07:27.215 00:07:27.215 true 00:07:27.215 15:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:27.215 15:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:27.476 15:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:27.476 15:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:27.476 15:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3594022 00:07:28.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.417 Nvme0n1 : 3.00 25222.00 98.52 0.00 0.00 0.00 0.00 0.00 00:07:28.417 [2024-11-06T14:19:46.400Z] =================================================================================================================== 00:07:28.417 [2024-11-06T14:19:46.400Z] Total : 25222.00 98.52 0.00 0.00 0.00 0.00 0.00 00:07:28.417 00:07:29.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.359 Nvme0n1 : 4.00 25268.75 98.71 0.00 0.00 0.00 0.00 0.00 00:07:29.359 [2024-11-06T14:19:47.342Z] 
=================================================================================================================== 00:07:29.359 [2024-11-06T14:19:47.342Z] Total : 25268.75 98.71 0.00 0.00 0.00 0.00 0.00 00:07:29.359 00:07:30.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.304 Nvme0n1 : 5.00 25296.20 98.81 0.00 0.00 0.00 0.00 0.00 00:07:30.304 [2024-11-06T14:19:48.287Z] =================================================================================================================== 00:07:30.304 [2024-11-06T14:19:48.287Z] Total : 25296.20 98.81 0.00 0.00 0.00 0.00 0.00 00:07:30.304 00:07:31.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.248 Nvme0n1 : 6.00 25325.17 98.93 0.00 0.00 0.00 0.00 0.00 00:07:31.248 [2024-11-06T14:19:49.231Z] =================================================================================================================== 00:07:31.248 [2024-11-06T14:19:49.231Z] Total : 25325.17 98.93 0.00 0.00 0.00 0.00 0.00 00:07:31.248 00:07:32.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.222 Nvme0n1 : 7.00 25336.14 98.97 0.00 0.00 0.00 0.00 0.00 00:07:32.222 [2024-11-06T14:19:50.205Z] =================================================================================================================== 00:07:32.222 [2024-11-06T14:19:50.205Z] Total : 25336.14 98.97 0.00 0.00 0.00 0.00 0.00 00:07:32.222 00:07:33.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.163 Nvme0n1 : 8.00 25344.88 99.00 0.00 0.00 0.00 0.00 0.00 00:07:33.163 [2024-11-06T14:19:51.146Z] =================================================================================================================== 00:07:33.163 [2024-11-06T14:19:51.146Z] Total : 25344.88 99.00 0.00 0.00 0.00 0.00 0.00 00:07:33.163 00:07:34.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.547 Nvme0n1 : 9.00 25358.78 99.06 0.00 0.00 0.00 0.00 0.00 00:07:34.547 [2024-11-06T14:19:52.530Z] =================================================================================================================== 00:07:34.547 [2024-11-06T14:19:52.530Z] Total : 25358.78 99.06 0.00 0.00 0.00 0.00 0.00 00:07:34.547 00:07:35.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.488 Nvme0n1 : 10.00 25370.10 99.10 0.00 0.00 0.00 0.00 0.00 00:07:35.488 [2024-11-06T14:19:53.471Z] =================================================================================================================== 00:07:35.488 [2024-11-06T14:19:53.471Z] Total : 25370.10 99.10 0.00 0.00 0.00 0.00 0.00 00:07:35.488 00:07:35.488 00:07:35.488 Latency(us) 00:07:35.488 [2024-11-06T14:19:53.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.488 Nvme0n1 : 10.00 25370.58 99.10 0.00 0.00 5042.03 1570.13 9284.27 00:07:35.488 [2024-11-06T14:19:53.471Z] =================================================================================================================== 00:07:35.488 [2024-11-06T14:19:53.471Z] Total : 25370.58 99.10 0.00 0.00 5042.03 1570.13 9284.27 00:07:35.488 { 00:07:35.488 "results": [ 00:07:35.488 { 00:07:35.488 "job": "Nvme0n1", 00:07:35.488 "core_mask": "0x2", 00:07:35.488 "workload": "randwrite", 00:07:35.488 "status": "finished", 00:07:35.488 "queue_depth": 128, 00:07:35.488 "io_size": 4096, 00:07:35.488 
"runtime": 10.004857, 00:07:35.488 "iops": 25370.57751050315, 00:07:35.488 "mibps": 99.10381840040293, 00:07:35.488 "io_failed": 0, 00:07:35.488 "io_timeout": 0, 00:07:35.488 "avg_latency_us": 5042.031712373291, 00:07:35.488 "min_latency_us": 1570.1333333333334, 00:07:35.488 "max_latency_us": 9284.266666666666 00:07:35.488 } 00:07:35.488 ], 00:07:35.488 "core_count": 1 00:07:35.488 } 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3593886 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3593886 ']' 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3593886 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3593886 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3593886' 00:07:35.488 killing process with pid 3593886 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3593886 00:07:35.488 Received shutdown signal, test time was about 10.000000 seconds 00:07:35.488 00:07:35.488 Latency(us) 00:07:35.488 [2024-11-06T14:19:53.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.488 [2024-11-06T14:19:53.471Z] =================================================================================================================== 00:07:35.488 [2024-11-06T14:19:53.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3593886 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.488 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:35.749 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:35.749 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:36.010 15:19:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3590011 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3590011 00:07:36.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3590011 Killed "${NVMF_APP[@]}" "$@" 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3596073 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3596073 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3596073 ']' 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:36.010 15:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:36.010 [2024-11-06 15:19:53.927540] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:07:36.010 [2024-11-06 15:19:53.927601] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.271 [2024-11-06 15:19:54.021754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.271 [2024-11-06 15:19:54.052628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.271 [2024-11-06 15:19:54.052658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.271 [2024-11-06 15:19:54.052664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.271 [2024-11-06 15:19:54.052668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:36.271 [2024-11-06 15:19:54.052673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.271 [2024-11-06 15:19:54.053163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.842 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:36.842 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:36.842 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:36.842 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:36.842 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:36.842 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.842 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:37.102 [2024-11-06 15:19:54.904889] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:37.102 [2024-11-06 15:19:54.904961] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:37.102 [2024-11-06 15:19:54.904982] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:37.102 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:37.102 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e54610be-00bf-45e9-9706-40b778155f10 00:07:37.102 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e54610be-00bf-45e9-9706-40b778155f10 00:07:37.102 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:37.102 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:37.102 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:37.102 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:37.102 15:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:37.362 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e54610be-00bf-45e9-9706-40b778155f10 -t 2000 00:07:37.362 [ 00:07:37.362 { 00:07:37.362 "name": "e54610be-00bf-45e9-9706-40b778155f10", 00:07:37.362 "aliases": [ 00:07:37.362 "lvs/lvol" 00:07:37.362 ], 00:07:37.362 "product_name": "Logical Volume", 00:07:37.362 "block_size": 4096, 00:07:37.362 "num_blocks": 38912, 00:07:37.362 "uuid": "e54610be-00bf-45e9-9706-40b778155f10", 00:07:37.362 "assigned_rate_limits": { 00:07:37.362 "rw_ios_per_sec": 0, 00:07:37.362 "rw_mbytes_per_sec": 0, 
00:07:37.362 "r_mbytes_per_sec": 0, 00:07:37.362 "w_mbytes_per_sec": 0 00:07:37.362 }, 00:07:37.362 "claimed": false, 00:07:37.362 "zoned": false, 00:07:37.362 "supported_io_types": { 00:07:37.362 "read": true, 00:07:37.362 "write": true, 00:07:37.362 "unmap": true, 00:07:37.362 "flush": false, 00:07:37.362 "reset": true, 00:07:37.362 "nvme_admin": false, 00:07:37.362 "nvme_io": false, 00:07:37.362 "nvme_io_md": false, 00:07:37.362 "write_zeroes": true, 00:07:37.362 "zcopy": false, 00:07:37.362 "get_zone_info": false, 00:07:37.362 "zone_management": false, 00:07:37.362 "zone_append": false, 00:07:37.362 "compare": false, 00:07:37.362 "compare_and_write": false, 00:07:37.362 "abort": false, 00:07:37.362 "seek_hole": true, 00:07:37.362 "seek_data": true, 00:07:37.362 "copy": false, 00:07:37.362 "nvme_iov_md": false 00:07:37.362 }, 00:07:37.362 "driver_specific": { 00:07:37.362 "lvol": { 00:07:37.362 "lvol_store_uuid": "a64efda9-0771-44bc-b7cd-440cb2defc33", 00:07:37.362 "base_bdev": "aio_bdev", 00:07:37.362 "thin_provision": false, 00:07:37.362 "num_allocated_clusters": 38, 00:07:37.362 "snapshot": false, 00:07:37.362 "clone": false, 00:07:37.362 "esnap_clone": false 00:07:37.362 } 00:07:37.362 } 00:07:37.362 } 00:07:37.362 ] 00:07:37.362 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:37.362 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:37.362 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:37.624 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:37.624 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:37.624 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:37.624 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:37.624 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:37.885 [2024-11-06 15:19:55.749519] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:37.885 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:37.885 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:37.885 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:37.885 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.885 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.885 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.885 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.885 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.885 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.885 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.885 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:37.885 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:38.146 request: 00:07:38.146 { 00:07:38.146 "uuid": "a64efda9-0771-44bc-b7cd-440cb2defc33", 00:07:38.146 "method": "bdev_lvol_get_lvstores", 00:07:38.146 "req_id": 1 00:07:38.146 } 00:07:38.146 Got JSON-RPC error response 00:07:38.146 response: 00:07:38.146 { 00:07:38.146 "code": -19, 00:07:38.146 "message": "No such device" 00:07:38.146 } 00:07:38.146 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:38.146 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.146 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:38.146 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.146 15:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.407 aio_bdev 00:07:38.407 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e54610be-00bf-45e9-9706-40b778155f10 00:07:38.407 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e54610be-00bf-45e9-9706-40b778155f10 00:07:38.407 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:38.407 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:38.407 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:38.407 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:38.407 15:19:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:38.407 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e54610be-00bf-45e9-9706-40b778155f10 -t 2000 00:07:38.667 [ 00:07:38.667 { 00:07:38.667 "name": "e54610be-00bf-45e9-9706-40b778155f10", 00:07:38.667 "aliases": [ 00:07:38.667 "lvs/lvol" 00:07:38.667 ], 00:07:38.668 "product_name": "Logical Volume", 00:07:38.668 "block_size": 4096, 00:07:38.668 "num_blocks": 38912, 00:07:38.668 "uuid": "e54610be-00bf-45e9-9706-40b778155f10", 00:07:38.668 "assigned_rate_limits": { 00:07:38.668 "rw_ios_per_sec": 0, 00:07:38.668 "rw_mbytes_per_sec": 0, 00:07:38.668 "r_mbytes_per_sec": 0, 00:07:38.668 "w_mbytes_per_sec": 0 00:07:38.668 }, 00:07:38.668 "claimed": false, 00:07:38.668 "zoned": false, 00:07:38.668 "supported_io_types": { 00:07:38.668 "read": true, 00:07:38.668 "write": true, 00:07:38.668 "unmap": true, 00:07:38.668 "flush": false, 00:07:38.668 "reset": true, 00:07:38.668 "nvme_admin": false, 00:07:38.668 "nvme_io": false, 00:07:38.668 "nvme_io_md": false, 00:07:38.668 "write_zeroes": true, 00:07:38.668 "zcopy": false, 00:07:38.668 "get_zone_info": false, 00:07:38.668 "zone_management": false, 00:07:38.668 "zone_append": false, 00:07:38.668 "compare": false, 00:07:38.668 "compare_and_write": false, 00:07:38.668 "abort": false, 00:07:38.668 "seek_hole": true, 00:07:38.668 "seek_data": true, 00:07:38.668 "copy": false, 00:07:38.668 "nvme_iov_md": false 00:07:38.668 }, 00:07:38.668 "driver_specific": { 00:07:38.668 "lvol": { 00:07:38.668 "lvol_store_uuid": "a64efda9-0771-44bc-b7cd-440cb2defc33", 00:07:38.668 "base_bdev": "aio_bdev", 00:07:38.668 "thin_provision": false, 00:07:38.668 "num_allocated_clusters": 38, 00:07:38.668 "snapshot": false, 00:07:38.668 "clone": false, 00:07:38.668 "esnap_clone": false 00:07:38.668 } 00:07:38.668 } 00:07:38.668 } 00:07:38.668 ] 00:07:38.668 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:38.668 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:38.668 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:38.929 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:38.929 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:38.929 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:38.929 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:38.929 15:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e54610be-00bf-45e9-9706-40b778155f10 00:07:39.189 15:19:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a64efda9-0771-44bc-b7cd-440cb2defc33 00:07:39.449 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:39.449 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:39.449 00:07:39.449 real 0m16.812s 00:07:39.449 user 0m44.351s 00:07:39.449 sys 0m2.986s 00:07:39.449 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:39.449 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:39.449 ************************************ 00:07:39.449 END TEST lvs_grow_dirty 00:07:39.449 ************************************ 00:07:39.449 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:39.449 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:07:39.449 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:07:39.449 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:07:39.449 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:39.710 nvmf_trace.0 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:39.710 rmmod nvme_tcp 00:07:39.710 rmmod nvme_fabrics 00:07:39.710 rmmod nvme_keyring 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:39.710 
15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3596073 ']' 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3596073 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3596073 ']' 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3596073 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3596073 00:07:39.710 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:39.711 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:39.711 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3596073' 00:07:39.711 killing process with pid 3596073 00:07:39.711 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3596073 00:07:39.711 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3596073 00:07:39.971 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:39.971 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:39.971 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:39.971 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:39.971 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:39.971 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:39.971 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:39.971 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:39.971 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:39.971 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.971 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.971 15:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.882 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:41.882 00:07:41.882 real 0m44.343s 00:07:41.882 user 1m6.339s 00:07:41.882 sys 0m10.710s 00:07:41.882 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:41.882 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.882 ************************************ 00:07:41.882 END TEST nvmf_lvs_grow 00:07:41.882 ************************************ 00:07:41.882 15:19:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:41.882 15:19:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:41.882 15:19:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:41.882 15:19:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.143 ************************************ 00:07:42.143 START TEST nvmf_bdev_io_wait 00:07:42.143 ************************************ 00:07:42.143 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:42.143 * Looking for test storage... 00:07:42.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.143 15:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:42.143 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:42.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.144 --rc genhtml_branch_coverage=1 00:07:42.144 --rc genhtml_function_coverage=1 00:07:42.144 --rc genhtml_legend=1 00:07:42.144 --rc geninfo_all_blocks=1 00:07:42.144 --rc geninfo_unexecuted_blocks=1 00:07:42.144 00:07:42.144 ' 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:42.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.144 --rc genhtml_branch_coverage=1 00:07:42.144 --rc genhtml_function_coverage=1 00:07:42.144 --rc genhtml_legend=1 00:07:42.144 --rc geninfo_all_blocks=1 00:07:42.144 --rc geninfo_unexecuted_blocks=1 00:07:42.144 00:07:42.144 ' 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:42.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.144 --rc genhtml_branch_coverage=1 00:07:42.144 --rc genhtml_function_coverage=1 00:07:42.144 --rc genhtml_legend=1 00:07:42.144 --rc geninfo_all_blocks=1 00:07:42.144 --rc geninfo_unexecuted_blocks=1 00:07:42.144 00:07:42.144 ' 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:42.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.144 --rc genhtml_branch_coverage=1 00:07:42.144 --rc genhtml_function_coverage=1 00:07:42.144 --rc genhtml_legend=1 00:07:42.144 --rc geninfo_all_blocks=1 00:07:42.144 --rc geninfo_unexecuted_blocks=1 00:07:42.144 00:07:42.144 ' 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.144 15:20:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.144 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.404 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.405 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.405 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.405 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.405 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.405 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.405 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:42.405 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:42.405 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.405 15:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:50.547 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:50.547 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.547 15:20:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.547 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:50.548 Found net devices under 0000:31:00.0: cvl_0_0 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:50.548 Found net devices under 0000:31:00.1: cvl_0_1 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:50.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:07:50.548 00:07:50.548 --- 10.0.0.2 ping statistics --- 00:07:50.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.548 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:50.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:07:50.548 00:07:50.548 --- 10.0.0.1 ping statistics --- 00:07:50.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.548 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3601189 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3601189 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3601189 ']' 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.548 15:20:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.548 [2024-11-06 15:20:07.731760] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
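
Before nvmf_tgt is launched above, the nvmf_tcp_init trace reduces to a small, repeatable topology: the two E810 ports found earlier are split so the target port lives in its own network namespace while the initiator port stays in the root namespace, giving NVMe/TCP a real link to cross. A minimal sketch of the equivalent commands, condensed from the trace and assuming the same cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing:

#!/usr/bin/env bash
# Sketch only: condensed from the nvmf_tcp_init trace above.
set -euo pipefail

TARGET_IF=cvl_0_0      # moved into the namespace; nvmf_tgt listens here (10.0.0.2)
INITIATOR_IF=cvl_0_1   # stays in the root namespace; hosts connect from here (10.0.0.1)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port; the comment tag lets teardown find and drop the rule later.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Reachability checks in both directions, matching the two pings in the trace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
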
00:07:50.548 [2024-11-06 15:20:07.731825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.548 [2024-11-06 15:20:07.832182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.548 [2024-11-06 15:20:07.886580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.548 [2024-11-06 15:20:07.886633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.548 [2024-11-06 15:20:07.886642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.548 [2024-11-06 15:20:07.886649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.548 [2024-11-06 15:20:07.886659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.548 [2024-11-06 15:20:07.889103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.548 [2024-11-06 15:20:07.889263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.548 [2024-11-06 15:20:07.889425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.548 [2024-11-06 15:20:07.889425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:50.810 [2024-11-06 15:20:08.680527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 Malloc0 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.811 [2024-11-06 15:20:08.746438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3601525 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3601527 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.811 { 00:07:50.811 "params": { 
00:07:50.811 "name": "Nvme$subsystem", 00:07:50.811 "trtype": "$TEST_TRANSPORT", 00:07:50.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.811 "adrfam": "ipv4", 00:07:50.811 "trsvcid": "$NVMF_PORT", 00:07:50.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.811 "hdgst": ${hdgst:-false}, 00:07:50.811 "ddgst": ${ddgst:-false} 00:07:50.811 }, 00:07:50.811 "method": "bdev_nvme_attach_controller" 00:07:50.811 } 00:07:50.811 EOF 00:07:50.811 )") 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3601529 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.811 { 00:07:50.811 "params": { 00:07:50.811 "name": "Nvme$subsystem", 00:07:50.811 "trtype": "$TEST_TRANSPORT", 00:07:50.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.811 "adrfam": "ipv4", 00:07:50.811 "trsvcid": "$NVMF_PORT", 00:07:50.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.811 "hdgst": ${hdgst:-false}, 00:07:50.811 "ddgst": ${ddgst:-false} 00:07:50.811 }, 00:07:50.811 "method": "bdev_nvme_attach_controller" 00:07:50.811 } 00:07:50.811 EOF 00:07:50.811 )") 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3601532 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.811 { 00:07:50.811 "params": { 00:07:50.811 "name": "Nvme$subsystem", 00:07:50.811 "trtype": "$TEST_TRANSPORT", 00:07:50.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.811 "adrfam": "ipv4", 00:07:50.811 "trsvcid": "$NVMF_PORT", 00:07:50.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.811 "hdgst": ${hdgst:-false}, 
00:07:50.811 "ddgst": ${ddgst:-false} 00:07:50.811 }, 00:07:50.811 "method": "bdev_nvme_attach_controller" 00:07:50.811 } 00:07:50.811 EOF 00:07:50.811 )") 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.811 { 00:07:50.811 "params": { 00:07:50.811 "name": "Nvme$subsystem", 00:07:50.811 "trtype": "$TEST_TRANSPORT", 00:07:50.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.811 "adrfam": "ipv4", 00:07:50.811 "trsvcid": "$NVMF_PORT", 00:07:50.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.811 "hdgst": ${hdgst:-false}, 00:07:50.811 "ddgst": ${ddgst:-false} 00:07:50.811 }, 00:07:50.811 "method": "bdev_nvme_attach_controller" 00:07:50.811 } 00:07:50.811 EOF 00:07:50.811 )") 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3601525 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.811 "params": { 00:07:50.811 "name": "Nvme1", 00:07:50.811 "trtype": "tcp", 00:07:50.811 "traddr": "10.0.0.2", 00:07:50.811 "adrfam": "ipv4", 00:07:50.811 "trsvcid": "4420", 00:07:50.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.811 "hdgst": false, 00:07:50.811 "ddgst": false 00:07:50.811 }, 00:07:50.811 "method": "bdev_nvme_attach_controller" 00:07:50.811 }' 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.811 "params": { 00:07:50.811 "name": "Nvme1", 00:07:50.811 "trtype": "tcp", 00:07:50.811 "traddr": "10.0.0.2", 00:07:50.811 "adrfam": "ipv4", 00:07:50.811 "trsvcid": "4420", 00:07:50.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.811 "hdgst": false, 00:07:50.811 "ddgst": false 00:07:50.811 }, 00:07:50.811 "method": "bdev_nvme_attach_controller" 00:07:50.811 }' 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.811 "params": { 00:07:50.811 "name": "Nvme1", 00:07:50.811 "trtype": "tcp", 00:07:50.811 "traddr": "10.0.0.2", 00:07:50.811 "adrfam": "ipv4", 00:07:50.811 "trsvcid": "4420", 00:07:50.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.811 "hdgst": false, 00:07:50.811 "ddgst": false 00:07:50.811 }, 00:07:50.811 "method": "bdev_nvme_attach_controller" 00:07:50.811 }' 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:50.811 15:20:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.811 "params": { 00:07:50.811 "name": "Nvme1", 00:07:50.811 "trtype": "tcp", 00:07:50.811 "traddr": "10.0.0.2", 00:07:50.811 "adrfam": "ipv4", 00:07:50.811 "trsvcid": "4420", 00:07:50.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.811 "hdgst": false, 00:07:50.811 "ddgst": false 00:07:50.811 }, 00:07:50.811 "method": "bdev_nvme_attach_controller" 00:07:50.811 }' 00:07:51.073 [2024-11-06 15:20:08.806854] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:07:51.073 [2024-11-06 15:20:08.806918] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:51.073 [2024-11-06 15:20:08.809487] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:07:51.073 [2024-11-06 15:20:08.809546] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:51.073 [2024-11-06 15:20:08.813932] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:07:51.073 [2024-11-06 15:20:08.814013] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:51.073 [2024-11-06 15:20:08.817560] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:07:51.073 [2024-11-06 15:20:08.817647] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:51.073 [2024-11-06 15:20:09.000780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.073 [2024-11-06 15:20:09.038687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:51.334 [2024-11-06 15:20:09.065726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.334 [2024-11-06 15:20:09.104143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:51.334 [2024-11-06 15:20:09.154608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.334 [2024-11-06 15:20:09.193730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:51.334 [2024-11-06 15:20:09.220449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.334 [2024-11-06 15:20:09.256100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:51.595 Running I/O for 1 seconds... 00:07:51.595 Running I/O for 1 seconds... 00:07:51.595 Running I/O for 1 seconds... 00:07:51.595 Running I/O for 1 seconds... 00:07:52.548 12375.00 IOPS, 48.34 MiB/s 00:07:52.548 Latency(us) 00:07:52.548 [2024-11-06T14:20:10.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.548 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:52.548 Nvme1n1 : 1.01 12432.09 48.56 0.00 0.00 10260.64 5461.33 18350.08 00:07:52.548 [2024-11-06T14:20:10.531Z] =================================================================================================================== 00:07:52.548 [2024-11-06T14:20:10.531Z] Total : 12432.09 48.56 0.00 0.00 10260.64 5461.33 18350.08 00:07:52.548 6089.00 IOPS, 23.79 MiB/s 00:07:52.548 Latency(us) 00:07:52.548 [2024-11-06T14:20:10.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.548 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:52.548 Nvme1n1 : 1.02 6115.12 23.89 0.00 0.00 20771.12 7536.64 34734.08 00:07:52.548 [2024-11-06T14:20:10.531Z] =================================================================================================================== 00:07:52.548 [2024-11-06T14:20:10.531Z] Total : 6115.12 23.89 0.00 0.00 20771.12 7536.64 34734.08 00:07:52.548 184152.00 IOPS, 719.34 MiB/s 00:07:52.548 Latency(us) 00:07:52.548 [2024-11-06T14:20:10.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.548 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:52.548 Nvme1n1 : 1.00 183780.49 717.89 0.00 0.00 692.79 324.27 2007.04 00:07:52.548 [2024-11-06T14:20:10.531Z] =================================================================================================================== 00:07:52.548 [2024-11-06T14:20:10.531Z] Total : 183780.49 717.89 0.00 0.00 692.79 324.27 2007.04 00:07:52.548 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3601527 00:07:52.548 6022.00 IOPS, 23.52 MiB/s 00:07:52.548 Latency(us) 00:07:52.548 [2024-11-06T14:20:10.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.548 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:52.548 Nvme1n1 : 1.01 6111.13 23.87 0.00 0.00 20865.82 5843.63 46093.65 00:07:52.548 
[2024-11-06T14:20:10.531Z] =================================================================================================================== 00:07:52.548 [2024-11-06T14:20:10.531Z] Total : 6111.13 23.87 0.00 0.00 20865.82 5843.63 46093.65 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3601529 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3601532 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:52.809 rmmod nvme_tcp 00:07:52.809 rmmod nvme_fabrics 00:07:52.809 rmmod nvme_keyring 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3601189 ']' 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3601189 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3601189 ']' 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3601189 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3601189 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3601189' 00:07:52.809 killing process with pid 3601189 
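
The four PIDs reaped above come from a plain fan-out/join: one bdevperf per workload type (write, read, flush, unmap), all against the same cnode1 subsystem, each pinned to its own core mask so no two instances share a reactor. A sketch of the pattern, reusing the bdevperf path from the trace; json_config here is a hypothetical stand-in for the gen_nvmf_target_json output shown earlier:

#!/usr/bin/env bash
# Sketch only: the write/read/flush/unmap fan-out behind WRITE_PID..UNMAP_PID.
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

masks=(0x10 0x20 0x40 0x80)
loads=(write read flush unmap)
pids=()

for i in "${!masks[@]}"; do
    # -i gives each instance a distinct shm id; -s 256 a private 256 MiB hugepage pool.
    # json_config: hypothetical helper emitting the attach-controller JSON seen above.
    "$bdevperf" -m "${masks[$i]}" -i "$((i + 1))" --json <(json_config) \
        -q 128 -o 4096 -w "${loads[$i]}" -t 1 -s 256 &
    pids+=($!)
done

wait "${pids[@]}"   # the script waits on each PID in turn, as the wait lines above show

One expected oddity in the numbers: flush posts ~184k IOPS while write/read/unmap sit in the 6-12k range, because a flush against the RAM-backed Malloc0 bdev has no media to sync and completes immediately at the target.
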
00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3601189 00:07:52.809 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3601189 00:07:53.070 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:53.070 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:53.070 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:53.070 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:53.070 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:53.070 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:53.070 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:53.070 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:53.070 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:53.070 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.070 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.070 15:20:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.982 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:54.982 00:07:54.982 real 0m13.037s 00:07:54.982 user 0m19.367s 00:07:54.982 sys 0m7.364s 00:07:54.982 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.982 15:20:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:54.982 ************************************ 00:07:54.982 END TEST nvmf_bdev_io_wait 00:07:54.982 ************************************ 00:07:55.243 15:20:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:55.243 15:20:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:55.243 15:20:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.243 15:20:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:55.243 ************************************ 00:07:55.243 START TEST nvmf_queue_depth 00:07:55.243 ************************************ 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:55.243 * Looking for test storage... 
00:07:55.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.243 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:55.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.505 --rc genhtml_branch_coverage=1 00:07:55.505 --rc genhtml_function_coverage=1 00:07:55.505 --rc genhtml_legend=1 00:07:55.505 --rc geninfo_all_blocks=1 00:07:55.505 --rc geninfo_unexecuted_blocks=1 00:07:55.505 00:07:55.505 ' 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:55.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.505 --rc genhtml_branch_coverage=1 00:07:55.505 --rc genhtml_function_coverage=1 00:07:55.505 --rc genhtml_legend=1 00:07:55.505 --rc geninfo_all_blocks=1 00:07:55.505 --rc geninfo_unexecuted_blocks=1 00:07:55.505 00:07:55.505 ' 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:55.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.505 --rc genhtml_branch_coverage=1 00:07:55.505 --rc genhtml_function_coverage=1 00:07:55.505 --rc genhtml_legend=1 00:07:55.505 --rc geninfo_all_blocks=1 00:07:55.505 --rc geninfo_unexecuted_blocks=1 00:07:55.505 00:07:55.505 ' 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:55.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.505 --rc genhtml_branch_coverage=1 00:07:55.505 --rc genhtml_function_coverage=1 00:07:55.505 --rc genhtml_legend=1 00:07:55.505 --rc geninfo_all_blocks=1 00:07:55.505 --rc geninfo_unexecuted_blocks=1 00:07:55.505 00:07:55.505 ' 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:55.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:55.505 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.506 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.506 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.506 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:55.506 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:55.506 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:55.506 15:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:03.645 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:03.645 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:03.645 Found net devices under 0000:31:00.0: cvl_0_0 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:03.645 Found net devices under 0000:31:00.1: cvl_0_1 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.645 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:08:03.645 00:08:03.645 --- 10.0.0.2 ping statistics --- 00:08:03.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.646 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
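For reference, the nvmf_tcp_init sequence traced above reduces to the shell steps below. This is a condensed sketch, not the verbatim script: the cvl_0_0/cvl_0_1 device names and 10.0.0.x addresses are taken from this rig's trace, paths are abbreviated, and the script's cleanup and retry logic is omitted.

    # Move one port of the NIC into a private namespace so target and
    # initiator can talk over real hardware on a single host.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1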
00:08:03.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:08:03.646 00:08:03.646 --- 10.0.0.1 ping statistics --- 00:08:03.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.646 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3606256 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3606256 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3606256 ']' 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.646 15:20:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.646 [2024-11-06 15:20:20.947077] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
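The nvmfappstart step traced above amounts to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A minimal sketch follows; the polling loop is an assumption standing in for the test's waitforlisten helper, which additionally verifies the process is still alive between attempts.

    # Start the NVMe-oF target in the namespace on core 1 (-m 0x2),
    # with all tracepoint groups enabled (-e 0xFFFF), as traced above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Rough equivalent of waitforlisten: poll the UNIX-domain RPC socket.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done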
00:08:03.646 [2024-11-06 15:20:20.947141] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.646 [2024-11-06 15:20:21.051380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.646 [2024-11-06 15:20:21.101710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.646 [2024-11-06 15:20:21.101764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.646 [2024-11-06 15:20:21.101774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.646 [2024-11-06 15:20:21.101781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.646 [2024-11-06 15:20:21.101787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.646 [2024-11-06 15:20:21.102562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.907 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:03.907 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:03.907 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.907 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:03.907 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.907 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.908 [2024-11-06 15:20:21.808072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.908 Malloc0 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.908 15:20:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.908 [2024-11-06 15:20:21.869299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3606363 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3606363 /var/tmp/bdevperf.sock 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3606363 ']' 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:03.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.908 15:20:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:04.169 [2024-11-06 15:20:21.927413] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
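Between target start-up and the I/O run, queue_depth.sh provisions the subsystem over RPC and then drives it with bdevperf. The calls below mirror the rpc_cmd invocations traced in this section, with ./scripts/rpc.py standing in for the test's rpc_cmd wrapper; flags are copied from the trace rather than chosen here.

    rpc="./scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192     # transport options as traced above
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf connects from the host side and runs 10 s of verify I/O at qd=1024.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests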
00:08:04.169 [2024-11-06 15:20:21.927480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3606363 ] 00:08:04.169 [2024-11-06 15:20:22.018712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.169 [2024-11-06 15:20:22.072540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.112 15:20:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:05.112 15:20:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:05.112 15:20:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:05.112 15:20:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.112 15:20:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:05.112 NVMe0n1 00:08:05.113 15:20:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.113 15:20:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:05.113 Running I/O for 10 seconds... 00:08:06.996 7529.00 IOPS, 29.41 MiB/s [2024-11-06T14:20:26.360Z] 8704.00 IOPS, 34.00 MiB/s [2024-11-06T14:20:27.300Z] 9220.67 IOPS, 36.02 MiB/s [2024-11-06T14:20:28.242Z] 9926.00 IOPS, 38.77 MiB/s [2024-11-06T14:20:29.183Z] 10447.60 IOPS, 40.81 MiB/s [2024-11-06T14:20:30.125Z] 10834.17 IOPS, 42.32 MiB/s [2024-11-06T14:20:31.068Z] 11150.14 IOPS, 43.56 MiB/s [2024-11-06T14:20:32.009Z] 11393.00 IOPS, 44.50 MiB/s [2024-11-06T14:20:33.399Z] 11543.44 IOPS, 45.09 MiB/s [2024-11-06T14:20:33.399Z] 11671.90 IOPS, 45.59 MiB/s 00:08:15.416 Latency(us) 00:08:15.416 [2024-11-06T14:20:33.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.416 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:15.416 Verification LBA range: start 0x0 length 0x4000 00:08:15.416 NVMe0n1 : 10.06 11708.23 45.74 0.00 0.00 87179.74 20097.71 85196.80 00:08:15.416 [2024-11-06T14:20:33.399Z] =================================================================================================================== 00:08:15.416 [2024-11-06T14:20:33.399Z] Total : 11708.23 45.74 0.00 0.00 87179.74 20097.71 85196.80 00:08:15.416 { 00:08:15.416 "results": [ 00:08:15.416 { 00:08:15.416 "job": "NVMe0n1", 00:08:15.416 "core_mask": "0x1", 00:08:15.416 "workload": "verify", 00:08:15.416 "status": "finished", 00:08:15.416 "verify_range": { 00:08:15.416 "start": 0, 00:08:15.416 "length": 16384 00:08:15.416 }, 00:08:15.416 "queue_depth": 1024, 00:08:15.416 "io_size": 4096, 00:08:15.416 "runtime": 10.056427, 00:08:15.416 "iops": 11708.233948299929, 00:08:15.416 "mibps": 45.735288860546596, 00:08:15.416 "io_failed": 0, 00:08:15.416 "io_timeout": 0, 00:08:15.416 "avg_latency_us": 87179.73530282054, 00:08:15.416 "min_latency_us": 20097.706666666665, 00:08:15.416 "max_latency_us": 85196.8 00:08:15.416 } 00:08:15.416 ], 00:08:15.416 "core_count": 1 00:08:15.416 } 00:08:15.416 15:20:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3606363 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3606363 ']' 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3606363 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3606363 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3606363' 00:08:15.416 killing process with pid 3606363 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3606363 00:08:15.416 Received shutdown signal, test time was about 10.000000 seconds 00:08:15.416 00:08:15.416 Latency(us) 00:08:15.416 [2024-11-06T14:20:33.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.416 [2024-11-06T14:20:33.399Z] =================================================================================================================== 00:08:15.416 [2024-11-06T14:20:33.399Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3606363 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:15.416 rmmod nvme_tcp 00:08:15.416 rmmod nvme_fabrics 00:08:15.416 rmmod nvme_keyring 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3606256 ']' 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3606256 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3606256 ']' 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 3606256 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3606256 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3606256' 00:08:15.416 killing process with pid 3606256 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3606256 00:08:15.416 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3606256 00:08:15.686 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:15.686 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:15.686 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:15.686 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:15.686 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:15.686 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:15.686 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:15.686 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.686 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:15.686 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.686 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.686 15:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.662 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:17.662 00:08:17.662 real 0m22.504s 00:08:17.662 user 0m25.623s 00:08:17.662 sys 0m7.124s 00:08:17.662 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:17.662 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.662 ************************************ 00:08:17.662 END TEST nvmf_queue_depth 00:08:17.662 ************************************ 00:08:17.662 15:20:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:17.662 15:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:17.662 15:20:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.662 15:20:35 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:17.662 ************************************ 00:08:17.662 START TEST nvmf_target_multipath 00:08:17.662 ************************************ 00:08:17.662 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:17.924 * Looking for test storage... 00:08:17.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:17.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.924 --rc genhtml_branch_coverage=1 00:08:17.924 --rc genhtml_function_coverage=1 00:08:17.924 --rc genhtml_legend=1 00:08:17.924 --rc geninfo_all_blocks=1 00:08:17.924 --rc geninfo_unexecuted_blocks=1 00:08:17.924 00:08:17.924 ' 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:17.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.924 --rc genhtml_branch_coverage=1 00:08:17.924 --rc genhtml_function_coverage=1 00:08:17.924 --rc genhtml_legend=1 00:08:17.924 --rc geninfo_all_blocks=1 00:08:17.924 --rc geninfo_unexecuted_blocks=1 00:08:17.924 00:08:17.924 ' 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:17.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.924 --rc genhtml_branch_coverage=1 00:08:17.924 --rc genhtml_function_coverage=1 00:08:17.924 --rc genhtml_legend=1 00:08:17.924 --rc geninfo_all_blocks=1 00:08:17.924 --rc geninfo_unexecuted_blocks=1 00:08:17.924 00:08:17.924 ' 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:17.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.924 --rc genhtml_branch_coverage=1 00:08:17.924 --rc genhtml_function_coverage=1 00:08:17.924 --rc genhtml_legend=1 00:08:17.924 --rc geninfo_all_blocks=1 00:08:17.924 --rc geninfo_unexecuted_blocks=1 00:08:17.924 00:08:17.924 ' 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.924 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:17.925 15:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:26.068 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:26.069 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:26.069 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:26.069 Found net devices under 0000:31:00.0: cvl_0_0 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.069 15:20:43 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:26.069 Found net devices under 0000:31:00.1: cvl_0_1 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:26.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:08:26.069 00:08:26.069 --- 10.0.0.2 ping statistics --- 00:08:26.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.069 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:26.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:08:26.069 00:08:26.069 --- 10.0.0.1 ping statistics --- 00:08:26.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.069 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:26.069 only one NIC for nvmf test 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
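The nvmf_tcp_init trace above is the whole network-namespace test bed for this run: one port of the E810 pair (cvl_0_0) is moved into a private namespace as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, a tagged iptables rule opens TCP 4420, and a ping in each direction proves the link before the test proper. A minimal sketch of the same bring-up, using the interface names and addresses from this log:

    # target port lives in its own namespace, reached over the physical link
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open NVMe/TCP port 4420; the SPDK_NVMF comment lets the teardown step above
    # (iptables-save | grep -v SPDK_NVMF | iptables-restore) strip the rule again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # verify reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1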
00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:26.069 rmmod nvme_tcp 00:08:26.069 rmmod nvme_fabrics 00:08:26.069 rmmod nvme_keyring 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:26.069 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:26.070 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:26.070 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:26.070 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:26.070 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:26.070 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:26.070 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.070 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.070 15:20:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:27.986 00:08:27.986 real 0m10.123s 00:08:27.986 user 0m2.176s 00:08:27.986 sys 0m5.853s 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:27.986 ************************************ 00:08:27.986 END TEST nvmf_target_multipath 00:08:27.986 ************************************ 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:27.986 ************************************ 00:08:27.986 START TEST nvmf_zcopy 00:08:27.986 ************************************ 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:27.986 * Looking for test storage... 
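The "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" warning in the prolog above (it recurs in the nvmf_zcopy prolog below) comes from a test-style [ receiving an empty expansion where -eq needs an integer; the trace shows the command as '[' '' -eq 1 ']'. It is benign here because the test simply evaluates false, but the usual bash guard is to default the expansion before comparing. A sketch, with VAR standing in for whatever variable common.sh line 33 actually expands (the name is not visible in this trace):

    [ "$VAR" -eq 1 ]        # when VAR is empty: [: : integer expression expected
    [ "${VAR:-0}" -eq 1 ]   # defaulted expansion keeps the comparison well-formed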
00:08:27.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.986 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:27.987 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:27.987 15:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:28.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.249 --rc genhtml_branch_coverage=1 00:08:28.249 --rc genhtml_function_coverage=1 00:08:28.249 --rc genhtml_legend=1 00:08:28.249 --rc geninfo_all_blocks=1 00:08:28.249 --rc geninfo_unexecuted_blocks=1 00:08:28.249 00:08:28.249 ' 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:28.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.249 --rc genhtml_branch_coverage=1 00:08:28.249 --rc genhtml_function_coverage=1 00:08:28.249 --rc genhtml_legend=1 00:08:28.249 --rc geninfo_all_blocks=1 00:08:28.249 --rc geninfo_unexecuted_blocks=1 00:08:28.249 00:08:28.249 ' 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:28.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.249 --rc genhtml_branch_coverage=1 00:08:28.249 --rc genhtml_function_coverage=1 00:08:28.249 --rc genhtml_legend=1 00:08:28.249 --rc geninfo_all_blocks=1 00:08:28.249 --rc geninfo_unexecuted_blocks=1 00:08:28.249 00:08:28.249 ' 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:28.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.249 --rc genhtml_branch_coverage=1 00:08:28.249 --rc genhtml_function_coverage=1 00:08:28.249 --rc genhtml_legend=1 00:08:28.249 --rc geninfo_all_blocks=1 00:08:28.249 --rc geninfo_unexecuted_blocks=1 00:08:28.249 00:08:28.249 ' 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.249 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.250 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.250 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:28.250 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:28.250 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:28.250 15:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.397 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:36.398 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:36.398 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:36.398 Found net devices under 0000:31:00.0: cvl_0_0 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:36.398 Found net devices under 0000:31:00.1: cvl_0_1 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:36.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:08:36.398 00:08:36.398 --- 10.0.0.2 ping statistics --- 00:08:36.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.398 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:08:36.398 00:08:36.398 --- 10.0.0.1 ping statistics --- 00:08:36.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.398 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:36.398 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3617363 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3617363 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3617363 ']' 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.399 15:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.399 [2024-11-06 15:20:53.734707] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:08:36.399 [2024-11-06 15:20:53.734780] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.399 [2024-11-06 15:20:53.836315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.399 [2024-11-06 15:20:53.885655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.399 [2024-11-06 15:20:53.885706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.399 [2024-11-06 15:20:53.885715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.399 [2024-11-06 15:20:53.885722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.399 [2024-11-06 15:20:53.885728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.399 [2024-11-06 15:20:53.886539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.659 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:36.659 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:36.659 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.659 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:36.659 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.660 [2024-11-06 15:20:54.617206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.660 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.920 [2024-11-06 15:20:54.641515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.920 malloc0 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:36.920 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:36.920 { 00:08:36.920 "params": { 00:08:36.920 "name": "Nvme$subsystem", 00:08:36.920 "trtype": "$TEST_TRANSPORT", 00:08:36.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.920 "adrfam": "ipv4", 00:08:36.920 "trsvcid": "$NVMF_PORT", 00:08:36.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.920 "hdgst": ${hdgst:-false}, 00:08:36.920 "ddgst": ${ddgst:-false} 00:08:36.920 }, 00:08:36.920 "method": "bdev_nvme_attach_controller" 00:08:36.920 } 00:08:36.920 EOF 00:08:36.920 )") 00:08:36.921 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:36.921 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
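The rpc_cmd sequence traced above is the complete zcopy target bring-up: create the TCP transport with zero-copy enabled, create subsystem cnode1, add its listener and the discovery listener on 10.0.0.2:4420, create a malloc bdev, and attach it as namespace 1. Replayed by hand against the running nvmf_tgt, with the rpc.py path and arguments exactly as in this run, that would be:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0    # 32 MB bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

gen_nvmf_target_json then assembles the bdev_nvme_attach_controller stanza printed just below, which bdevperf consumes over --json /dev/fd/62.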
00:08:36.921 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:36.921 15:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:36.921 "params": { 00:08:36.921 "name": "Nvme1", 00:08:36.921 "trtype": "tcp", 00:08:36.921 "traddr": "10.0.0.2", 00:08:36.921 "adrfam": "ipv4", 00:08:36.921 "trsvcid": "4420", 00:08:36.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:36.921 "hdgst": false, 00:08:36.921 "ddgst": false 00:08:36.921 }, 00:08:36.921 "method": "bdev_nvme_attach_controller" 00:08:36.921 }' 00:08:36.921 [2024-11-06 15:20:54.744043] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:08:36.921 [2024-11-06 15:20:54.744110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3617410 ] 00:08:36.921 [2024-11-06 15:20:54.840195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.921 [2024-11-06 15:20:54.893451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.181 Running I/O for 10 seconds... 00:08:39.508 6378.00 IOPS, 49.83 MiB/s [2024-11-06T14:20:58.431Z] 7191.50 IOPS, 56.18 MiB/s [2024-11-06T14:20:59.374Z] 8036.33 IOPS, 62.78 MiB/s [2024-11-06T14:21:00.316Z] 8459.50 IOPS, 66.09 MiB/s [2024-11-06T14:21:01.257Z] 8687.20 IOPS, 67.87 MiB/s [2024-11-06T14:21:02.200Z] 8851.83 IOPS, 69.15 MiB/s [2024-11-06T14:21:03.141Z] 8975.00 IOPS, 70.12 MiB/s [2024-11-06T14:21:04.525Z] 9071.50 IOPS, 70.87 MiB/s [2024-11-06T14:21:05.467Z] 9141.33 IOPS, 71.42 MiB/s [2024-11-06T14:21:05.467Z] 9198.10 IOPS, 71.86 MiB/s 00:08:47.484 Latency(us) 00:08:47.484 [2024-11-06T14:21:05.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.485 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:47.485 Verification LBA range: start 0x0 length 0x1000 00:08:47.485 Nvme1n1 : 10.01 9202.53 71.89 0.00 0.00 13863.88 1774.93 29272.75 00:08:47.485 [2024-11-06T14:21:05.468Z] =================================================================================================================== 00:08:47.485 [2024-11-06T14:21:05.468Z] Total : 9202.53 71.89 0.00 0.00 13863.88 1774.93 29272.75 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3619604 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:47.485 { 00:08:47.485 "params": { 00:08:47.485 "name": 
"Nvme$subsystem", 00:08:47.485 "trtype": "$TEST_TRANSPORT", 00:08:47.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.485 "adrfam": "ipv4", 00:08:47.485 "trsvcid": "$NVMF_PORT", 00:08:47.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.485 "hdgst": ${hdgst:-false}, 00:08:47.485 "ddgst": ${ddgst:-false} 00:08:47.485 }, 00:08:47.485 "method": "bdev_nvme_attach_controller" 00:08:47.485 } 00:08:47.485 EOF 00:08:47.485 )") 00:08:47.485 [2024-11-06 15:21:05.241368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.241398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:47.485 15:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:47.485 "params": { 00:08:47.485 "name": "Nvme1", 00:08:47.485 "trtype": "tcp", 00:08:47.485 "traddr": "10.0.0.2", 00:08:47.485 "adrfam": "ipv4", 00:08:47.485 "trsvcid": "4420", 00:08:47.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:47.485 "hdgst": false, 00:08:47.485 "ddgst": false 00:08:47.485 }, 00:08:47.485 "method": "bdev_nvme_attach_controller" 00:08:47.485 }' 00:08:47.485 [2024-11-06 15:21:05.253360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.253368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.265387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.265394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.277416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.277427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.289446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.289454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.301477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.301485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.301767] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:08:47.485 [2024-11-06 15:21:05.301826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3619604 ] 00:08:47.485 [2024-11-06 15:21:05.313507] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.313514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.325537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.325545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.337566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.337574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.349597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.349606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.361628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.361635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.373659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.373667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.385688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.385696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.389005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.485 [2024-11-06 15:21:05.397719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.397728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.409753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.409763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.418047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.485 [2024-11-06 15:21:05.421784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.421791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.433818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.433827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.445849] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.485 [2024-11-06 15:21:05.445861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.485 [2024-11-06 15:21:05.457874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:47.485 [2024-11-06 15:21:05.457884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.469905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.469920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.481936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.481944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.493982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.493999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.506019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.506030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.518035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.518045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.530063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.530070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.542095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.542103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.554124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.554132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.566174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.566185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.578187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.578195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.590217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.590225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.602249] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.602257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.614284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.614293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.626313] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.626321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 
15:21:05.638344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.638351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.650375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.650383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.662406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.662415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.674435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.674443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.686465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.686472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.698497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.698508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 [2024-11-06 15:21:05.710763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.710777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.747 Running I/O for 5 seconds... 00:08:47.747 [2024-11-06 15:21:05.722562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.747 [2024-11-06 15:21:05.722574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.008 [2024-11-06 15:21:05.737866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.008 [2024-11-06 15:21:05.737885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.008 [2024-11-06 15:21:05.751544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.008 [2024-11-06 15:21:05.751560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.008 [2024-11-06 15:21:05.765174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.008 [2024-11-06 15:21:05.765194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.008 [2024-11-06 15:21:05.779166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.008 [2024-11-06 15:21:05.779182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.008 [2024-11-06 15:21:05.792719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.008 [2024-11-06 15:21:05.792735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.008 [2024-11-06 15:21:05.806354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.008 [2024-11-06 15:21:05.806373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.008 [2024-11-06 15:21:05.819390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
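The condensed pair above is an expected-failure path rather than a crash: throughout the 5-second run the harness evidently keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is still attached, so spdk_nvmf_subsystem_add_ns_ext rejects each attempt and the RPC layer logs the rejection. One such attempt can be sketched against the stock rpc.py (the echo message is illustrative, not from the log):

    # Sketch: re-adding an NSID that is already attached is refused by the target;
    # rpc.py exits non-zero and the target emits the two-line pair condensed above.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
      || echo "expected failure: NSID 1 already in use"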
[... error pair continues with successive timestamps through 15:21:06.712 ...]
18651.00 IOPS, 145.71 MiB/s [2024-11-06T14:21:06.776Z]
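A quick consistency check on these throughput samples: -o 8192 makes every I/O 8192 bytes, i.e. 1/128 MiB, so the MiB/s column should equal IOPS divided by 128 (a sketch, assuming bc is available):

    # 8 KiB per I/O  =>  MiB/s = IOPS / 128
    echo '18651.00 / 128' | bc -l   # 145.71..., matching the randrw sample above
    echo '9202.53 / 128'  | bc -l   # 71.89...,  matching the verify run's table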
[... error pair repeats, 15:21:06.725 through 15:21:07.717 ...]
18763.50 IOPS, 146.59 MiB/s [2024-11-06T14:21:07.822Z]
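These samples come from the second bdevperf pass launched above with -t 5 -q 128 -w randrw -M 50 -o 8192, i.e. a 5-second 50/50 random read/write mix at queue depth 128 with 8 KiB I/Os. Outside the harness the same launch can be sketched with process substitution standing in for the /dev/fd/63 config pipe (assumes the nvmf test helpers are sourced so gen_nvmf_target_json, traced earlier, is defined):

    # Feed the generated attach-controller config to bdevperf over an anonymous fd.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192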
[... error pair repeats, 15:21:07.731 through 15:21:08.692 ...]
00:08:50.883 [2024-11-06 15:21:08.705038]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.705053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.883 [2024-11-06 15:21:08.717715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.717730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.883 18823.33 IOPS, 147.06 MiB/s [2024-11-06T14:21:08.866Z] [2024-11-06 15:21:08.731435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.731451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.883 [2024-11-06 15:21:08.744378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.744395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.883 [2024-11-06 15:21:08.757751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.757770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.883 [2024-11-06 15:21:08.771287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.771307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.883 [2024-11-06 15:21:08.784298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.784314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.883 [2024-11-06 15:21:08.797849] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.797868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.883 [2024-11-06 15:21:08.810484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.810500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.883 [2024-11-06 15:21:08.823494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.823511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.883 [2024-11-06 15:21:08.836716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.836731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.883 [2024-11-06 15:21:08.849939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.849955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.883 [2024-11-06 15:21:08.863695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.883 [2024-11-06 15:21:08.863712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:08.876237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:08.876254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:08.888661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:51.144 [2024-11-06 15:21:08.888676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:08.902274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:08.902290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:08.915531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:08.915546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:08.928905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:08.928920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:08.941795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:08.941819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:08.954898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:08.954913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:08.967788] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:08.967803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:08.981205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:08.981221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:08.994612] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:08.994628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:09.008081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:09.008098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:09.021645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:09.021661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:09.034230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:09.034246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:09.046866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:09.046881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:09.060106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:09.060121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.144 [2024-11-06 15:21:09.073941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.144 [2024-11-06 15:21:09.073956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.145 [2024-11-06 15:21:09.086855] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.145 [2024-11-06 15:21:09.086870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.145 [2024-11-06 15:21:09.100303] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.145 [2024-11-06 15:21:09.100320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.145 [2024-11-06 15:21:09.114105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.145 [2024-11-06 15:21:09.114121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.126647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.126663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.139887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.139909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.153395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.153410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.166419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.166435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.180302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.180318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.193093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.193109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.206719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.206734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.220084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.220099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.233278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.233293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.246850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.246866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.260032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.260051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.273242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.273260] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.286277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.286293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.299366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.299381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.312462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.312478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.325826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.325841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.339273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.339288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.352297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.352312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.365481] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.365496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.406 [2024-11-06 15:21:09.379161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.406 [2024-11-06 15:21:09.379176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.667 [2024-11-06 15:21:09.392487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.392503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.405946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.405962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.419142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.419157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.432728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.432752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.446251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.446273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.459381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.459401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.472664] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.472680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.486300] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.486315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.499778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.499794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.512888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.512904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.526246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.526261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.539693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.539707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.552989] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.553005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.566396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.566411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.579740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.579764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.593154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.593172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.606322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.606338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.619722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.619737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.633101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.633116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.668 [2024-11-06 15:21:09.645853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.668 [2024-11-06 15:21:09.645869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.659116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.659133] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.671867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.671885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.685462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.685477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.698197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.698213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.711419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.711435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.724679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.724695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 18860.25 IOPS, 147.35 MiB/s [2024-11-06T14:21:09.911Z] [2024-11-06 15:21:09.738065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.738080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.751072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.751088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.764401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.764418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.777704] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.777719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.790354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.790369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.803430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.803449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.816582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.816598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.830240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.830256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.843483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.843498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 
15:21:09.857104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.857119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.870287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.870302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.883371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.883387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.928 [2024-11-06 15:21:09.896698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.928 [2024-11-06 15:21:09.896713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:09.910031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:09.910050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:09.923526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:09.923544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:09.935952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:09.935974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:09.948899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:09.948916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:09.962292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:09.962307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:09.975560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:09.975576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:09.988844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:09.988859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.002197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.002671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.016219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.016242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.029178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.029196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.042160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.042176] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.055633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.055649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.069527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.069544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.082590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.082607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.095710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.095731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.109539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.109555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.123959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.123977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.138767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.138785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.152386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.152402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.188 [2024-11-06 15:21:10.165620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.188 [2024-11-06 15:21:10.165637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.179138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.179158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.192416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.192438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.205678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.205693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.219002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.219017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.232285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.232301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.245042] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.245057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.258463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.258479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.271414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.271429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.284854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.284870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.298053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.298068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.311440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.311456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.324445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.324460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.337935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.337953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.351459] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.351474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.364588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.364603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.378010] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.378026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.391732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.391754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.405031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.405046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.449 [2024-11-06 15:21:10.418304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.449 [2024-11-06 15:21:10.418320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.431636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.431651] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.444370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.444390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.457434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.457449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.470628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.470644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.484022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.484039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.497698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.497714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.511278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.511293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.524740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.524765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.538672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.538688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.552022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.552040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.564459] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.564474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.577703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.577719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.590579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.590594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.604026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.604041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.617722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.617741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.631016] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.631032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.644264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.644279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.657837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.657853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.671373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.671388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.711 [2024-11-06 15:21:10.684628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.711 [2024-11-06 15:21:10.684643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.972 [2024-11-06 15:21:10.698185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.972 [2024-11-06 15:21:10.698208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.972 [2024-11-06 15:21:10.711027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.972 [2024-11-06 15:21:10.711042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.972 [2024-11-06 15:21:10.724356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.972 [2024-11-06 15:21:10.724372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.972 18841.40 IOPS, 147.20 MiB/s [2024-11-06T14:21:10.955Z] [2024-11-06 15:21:10.737418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.972 [2024-11-06 15:21:10.737434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.972 00:08:52.972 Latency(us) 00:08:52.972 [2024-11-06T14:21:10.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.972 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:52.972 Nvme1n1 : 5.01 18844.33 147.22 0.00 0.00 6785.91 2812.59 16274.77 00:08:52.972 [2024-11-06T14:21:10.955Z] =================================================================================================================== 00:08:52.972 [2024-11-06T14:21:10.955Z] Total : 18844.33 147.22 0.00 0.00 6785.91 2812.59 16274.77 00:08:52.972 [2024-11-06 15:21:10.746558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.972 [2024-11-06 15:21:10.746572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.972 [2024-11-06 15:21:10.758595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.972 [2024-11-06 15:21:10.758610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.972 [2024-11-06 15:21:10.770620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.972 [2024-11-06 15:21:10.770633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.972 [2024-11-06 
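Note on the collapsed run above: each repetition is one attempt, apparently from a background loop in zcopy.sh, to attach a namespace as NSID 1 while nqn.2016-06.io.spdk:cnode1 already exposes that NSID; the target rejects every attempt without disturbing the running I/O job. A minimal sketch of the same negative-path check driven by SPDK's rpc.py against a standalone target (the spare bdev name Malloc1 is a hypothetical stand-in, not taken from this log):

    # NSID 1 is already claimed on cnode1, so this add must fail;
    # Malloc1 is a placeholder for any otherwise-unused bdev.
    if ! scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1; then
            echo 'duplicate NSID rejected, as the test expects'
    fi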
00:08:52.973 [2024-11-06 15:21:10.842804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:52.973 [2024-11-06 15:21:10.842812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:52.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3619604) - No such process
00:08:52.973 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3619604
[the xtrace_disable / set +x / [[ 0 == 0 ]] bookkeeping lines around each rpc_cmd below are elided]
00:08:52.973 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:52.973 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:52.973 delay0
00:08:52.973 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:52.973 15:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:08:53.233 [2024-11-06 15:21:11.062926] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
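Note: the rpc_cmd sequence above swaps the fast namespace for a deliberately slow delay bdev, evidently so that plenty of commands are still in flight when the abort example starts cancelling them. The same steps, sketched with rpc.py outside the harness (flags mirror the log; per bdev_delay_create usage, -r/-t are average/p99 read latency and -w/-n average/p99 write latency, in microseconds):

    # Detach the original namespace, wrap malloc0 in a ~1 s delay bdev,
    # and re-export it as NSID 1 of the same subsystem.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
            -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Queue 64-deep random I/O against it for 5 s and abort it aggressively:
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'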
00:09:01.367 Initializing NVMe Controllers
00:09:01.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:01.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:01.367 Initialization complete. Launching workers.
00:09:01.367 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 278, failed: 22108
00:09:01.367 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22304, failed to submit 82
00:09:01.367 success 22196, unsuccessful 108, failed 0
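Note: the abort counters above are internally consistent: 22196 successful + 108 unsuccessful = 22304, exactly the number of abort commands submitted, with 0 reported failed outright; the remaining 82 are the ones that could not be submitted at all.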
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:01.368 rmmod nvme_tcp
00:09:01.368 rmmod nvme_fabrics
00:09:01.368 rmmod nvme_keyring
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3617363 ']'
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3617363
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3617363 ']'
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3617363
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3617363
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3617363'
00:09:01.368 killing process with pid 3617363
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3617363
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3617363
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:01.368 15:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:02.750 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:02.750
00:09:02.750 real 0m34.778s
00:09:02.750 user 0m45.793s
00:09:02.750 sys 0m11.807s
00:09:02.750 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:02.750 ************************************
00:09:02.750 END TEST nvmf_zcopy
00:09:02.750 ************************************
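Note on the timings above: user plus sys CPU time (45.793 s + 11.807 s ≈ 57.6 s) exceeds the 34.778 s wall-clock total, consistent with the target's reactor threads and the test's I/O jobs keeping more than one core busy at once.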
00:09:03.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:03.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.011 --rc genhtml_branch_coverage=1 00:09:03.011 --rc genhtml_function_coverage=1 00:09:03.011 --rc genhtml_legend=1 00:09:03.011 --rc geninfo_all_blocks=1 00:09:03.011 --rc geninfo_unexecuted_blocks=1 00:09:03.011 00:09:03.011 ' 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:03.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.011 --rc genhtml_branch_coverage=1 00:09:03.011 --rc genhtml_function_coverage=1 00:09:03.011 --rc genhtml_legend=1 00:09:03.011 --rc geninfo_all_blocks=1 00:09:03.011 --rc geninfo_unexecuted_blocks=1 00:09:03.011 00:09:03.011 ' 00:09:03.011 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:03.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.011 --rc genhtml_branch_coverage=1 00:09:03.011 --rc genhtml_function_coverage=1 00:09:03.011 --rc genhtml_legend=1 00:09:03.011 --rc geninfo_all_blocks=1 00:09:03.011 --rc geninfo_unexecuted_blocks=1 00:09:03.011 00:09:03.011 ' 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:03.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.012 --rc genhtml_branch_coverage=1 00:09:03.012 --rc genhtml_function_coverage=1 00:09:03.012 --rc genhtml_legend=1 00:09:03.012 --rc geninfo_all_blocks=1 00:09:03.012 --rc geninfo_unexecuted_blocks=1 00:09:03.012 00:09:03.012 ' 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
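
The --rc option probing just above is scripts/common.sh deciding whether the installed lcov predates 2.x (older lcov still wants the explicit branch/function-coverage flags). A minimal sketch of that comparison logic, inferred from the cmp_versions trace; the function name cmp_ver here is hypothetical:

    # Exit 0 when $1 sorts strictly before $2; components are split on
    # '.', '-' and ':' and compared numerically, padding the shorter
    # version with zeros.
    cmp_ver() {
        local IFS=.-:
        local -a ver1 ver2
        local v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    cmp_ver 1.15 2 && echo "old lcov: keep the explicit --rc coverage flags"
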
00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:03.012 
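
The "[: : integer expression expected" line captured above is harness noise rather than a test failure: build_nvmf_app_args at nvmf/common.sh line 33 runs a numeric test against a variable that is empty on this rig, so '[' receives '' where it expects an integer and exits with status 2. A two-line sketch of the failure mode and the usual defensive default (the variable name flag is illustrative only):

    flag=''
    [ "$flag" -eq 1 ]        # bash: [: : integer expression expected (status 2)
    [ "${flag:-0}" -eq 1 ]   # defaulting to 0 keeps the test quiet and false
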
15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:03.012 15:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:11.150 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:11.150 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.150 15:21:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:11.150 Found net devices under 0000:31:00.0: cvl_0_0 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:11.150 Found net devices under 0000:31:00.1: cvl_0_1 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:11.150 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:11.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:09:11.151 00:09:11.151 --- 10.0.0.2 ping statistics --- 00:09:11.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.151 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:11.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:09:11.151 00:09:11.151 --- 10.0.0.1 ping statistics --- 00:09:11.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.151 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3627031 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3627031 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3627031 ']' 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:11.151 15:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.151 [2024-11-06 15:21:28.563717] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
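
At this point nvmftestinit has assembled the standard two-port loopback topology: physical port cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings above prove reachability in both directions before nvmf_tgt starts inside the namespace. Reduced to bare ip(8)/iptables commands, the setup the trace performs looks like this (a sketch; nvmf/common.sh also flushes stale addresses first and tags its iptables rule with an SPDK_NVMF comment):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic from the initiator port into the host
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
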
00:09:11.151 [2024-11-06 15:21:28.563795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.151 [2024-11-06 15:21:28.663078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.151 [2024-11-06 15:21:28.717721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.151 [2024-11-06 15:21:28.717789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.151 [2024-11-06 15:21:28.717797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.151 [2024-11-06 15:21:28.717805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.151 [2024-11-06 15:21:28.717815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.151 [2024-11-06 15:21:28.719820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.151 [2024-11-06 15:21:28.720051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.151 [2024-11-06 15:21:28.720051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.151 [2024-11-06 15:21:28.719892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.412 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:11.412 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:11.412 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:11.412 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:11.412 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.673 [2024-11-06 15:21:29.443642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.673 Malloc0 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.673 [2024-11-06 15:21:29.520665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:11.673 test case1: single bdev can't be used in multiple subsystems 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.673 [2024-11-06 15:21:29.556527] bdev.c:8462:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:11.673 [2024-11-06 15:21:29.556555] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:11.673 [2024-11-06 15:21:29.556563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.673 request: 00:09:11.673 { 00:09:11.673 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:11.673 "namespace": { 00:09:11.673 "bdev_name": "Malloc0", 00:09:11.673 "no_auto_visible": false, 
00:09:11.673 "no_metadata": false 00:09:11.673 }, 00:09:11.673 "method": "nvmf_subsystem_add_ns", 00:09:11.673 "req_id": 1 00:09:11.673 } 00:09:11.673 Got JSON-RPC error response 00:09:11.673 response: 00:09:11.673 { 00:09:11.673 "code": -32602, 00:09:11.673 "message": "Invalid parameters" 00:09:11.673 } 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:11.673 Adding namespace failed - expected result. 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:11.673 test case2: host connect to nvmf target in multiple paths 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.673 [2024-11-06 15:21:29.568733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.673 15:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.585 15:21:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:14.967 15:21:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:14.967 15:21:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:14.967 15:21:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.967 15:21:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:14.967 15:21:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:16.880 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:16.880 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:16.880 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.880 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:16.880 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.880 15:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:16.880 15:21:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:16.880 [global] 00:09:16.880 thread=1 00:09:16.880 invalidate=1 00:09:16.880 rw=write 00:09:16.880 time_based=1 00:09:16.880 runtime=1 00:09:16.880 ioengine=libaio 00:09:16.880 direct=1 00:09:16.880 bs=4096 00:09:16.880 iodepth=1 00:09:16.880 norandommap=0 00:09:16.880 numjobs=1 00:09:16.880 00:09:16.880 verify_dump=1 00:09:16.880 verify_backlog=512 00:09:16.880 verify_state_save=0 00:09:16.880 do_verify=1 00:09:16.880 verify=crc32c-intel 00:09:16.880 [job0] 00:09:16.880 filename=/dev/nvme0n1 00:09:16.880 Could not set queue depth (nvme0n1) 00:09:17.140 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:17.140 fio-3.35 00:09:17.140 Starting 1 thread 00:09:18.522 00:09:18.522 job0: (groupid=0, jobs=1): err= 0: pid=3628576: Wed Nov 6 15:21:36 2024 00:09:18.522 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:18.522 slat (nsec): min=24939, max=43340, avg=25607.30, stdev=2061.85 00:09:18.522 clat (usec): min=717, max=1150, avg=980.55, stdev=60.26 00:09:18.522 lat (usec): min=743, max=1175, avg=1006.16, stdev=60.17 00:09:18.522 clat percentiles (usec): 00:09:18.522 | 1.00th=[ 807], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 938], 00:09:18.522 | 30.00th=[ 963], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:09:18.523 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1045], 95.00th=[ 1057], 00:09:18.523 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1156], 99.95th=[ 1156], 00:09:18.523 | 99.99th=[ 1156] 00:09:18.523 write: IOPS=753, BW=3013KiB/s (3085kB/s)(3016KiB/1001msec); 0 zone resets 00:09:18.523 slat (nsec): min=9537, max=65301, avg=28972.32, stdev=9449.31 00:09:18.523 clat (usec): min=319, max=814, avg=601.29, stdev=90.99 00:09:18.523 lat (usec): min=331, max=847, avg=630.27, stdev=94.95 00:09:18.523 clat percentiles (usec): 00:09:18.523 | 1.00th=[ 359], 5.00th=[ 424], 10.00th=[ 474], 20.00th=[ 537], 00:09:18.523 | 30.00th=[ 570], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 635], 00:09:18.523 | 70.00th=[ 668], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 725], 00:09:18.523 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 816], 99.95th=[ 816], 00:09:18.523 | 99.99th=[ 816] 00:09:18.523 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:18.523 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:18.523 lat (usec) : 500=8.85%, 750=49.29%, 1000=24.09% 00:09:18.523 lat (msec) : 2=17.77% 00:09:18.523 cpu : usr=2.20%, sys=3.30%, ctx=1266, majf=0, minf=1 00:09:18.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.523 issued rwts: total=512,754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.523 00:09:18.523 Run status group 0 (all jobs): 00:09:18.523 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:09:18.523 WRITE: bw=3013KiB/s (3085kB/s), 3013KiB/s-3013KiB/s (3085kB/s-3085kB/s), io=3016KiB (3088kB), run=1001-1001msec 00:09:18.523 00:09:18.523 Disk stats (read/write): 00:09:18.523 nvme0n1: ios=562/593, merge=0/0, ticks=535/337, in_queue=872, util=92.28% 00:09:18.523 15:21:36 
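
The fio pass above completes cleanly: a one-second 4 KiB sequential-write job at iodepth=1 over NVMe/TCP (512 verify reads, 754 writes, completion latencies under roughly 1 ms in both directions, crc32c-intel verification on). Here is the wrapper-generated job reassembled as a standalone file, in case the run needs reproducing by hand (a sketch; the wrapper may emit options in a different order):

    cat > /tmp/nmic-job0.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio /tmp/nmic-job0.fio
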
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.523 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.523 rmmod nvme_tcp 00:09:18.523 rmmod nvme_fabrics 00:09:18.523 rmmod nvme_keyring 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3627031 ']' 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3627031 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3627031 ']' 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3627031 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3627031 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3627031' 00:09:18.783 killing process with pid 3627031 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3627031 00:09:18.783 15:21:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3627031 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.783 15:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.811 15:21:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.072 00:09:21.072 real 0m18.114s 00:09:21.072 user 0m48.745s 00:09:21.072 sys 0m6.644s 00:09:21.072 15:21:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:21.072 15:21:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:21.072 ************************************ 00:09:21.072 END TEST nvmf_nmic 00:09:21.072 ************************************ 00:09:21.072 15:21:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:21.072 15:21:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:21.072 15:21:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:21.072 15:21:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.072 ************************************ 00:09:21.072 START TEST nvmf_fio_target 00:09:21.072 ************************************ 00:09:21.072 15:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:21.072 * Looking for test storage... 
00:09:21.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.072 15:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:21.072 15:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:21.072 15:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:21.072 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:21.072 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.072 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.072 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.072 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.072 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.072 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.072 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:21.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.334 --rc genhtml_branch_coverage=1 00:09:21.334 --rc genhtml_function_coverage=1 00:09:21.334 --rc genhtml_legend=1 00:09:21.334 --rc geninfo_all_blocks=1 00:09:21.334 --rc geninfo_unexecuted_blocks=1 00:09:21.334 00:09:21.334 ' 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:21.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.334 --rc genhtml_branch_coverage=1 00:09:21.334 --rc genhtml_function_coverage=1 00:09:21.334 --rc genhtml_legend=1 00:09:21.334 --rc geninfo_all_blocks=1 00:09:21.334 --rc geninfo_unexecuted_blocks=1 00:09:21.334 00:09:21.334 ' 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:21.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.334 --rc genhtml_branch_coverage=1 00:09:21.334 --rc genhtml_function_coverage=1 00:09:21.334 --rc genhtml_legend=1 00:09:21.334 --rc geninfo_all_blocks=1 00:09:21.334 --rc geninfo_unexecuted_blocks=1 00:09:21.334 00:09:21.334 ' 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:21.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.334 --rc genhtml_branch_coverage=1 00:09:21.334 --rc genhtml_function_coverage=1 00:09:21.334 --rc genhtml_legend=1 00:09:21.334 --rc geninfo_all_blocks=1 00:09:21.334 --rc geninfo_unexecuted_blocks=1 00:09:21.334 00:09:21.334 ' 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.334 15:21:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.334 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:21.335 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.335 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.335 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.335 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.335 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.335 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.335 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.335 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.335 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.335 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.335 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.335 15:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.468 15:21:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.468 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:29.469 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:29.469 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.469 15:21:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:29.469 Found net devices under 0000:31:00.0: cvl_0_0 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:29.469 Found net devices under 0000:31:00.1: cvl_0_1 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.469 15:21:46 
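The device scan traced above pairs each E810 PCI function with its kernel net device by globbing sysfs. A condensed sketch of that lookup for the first function found in this run (the glob and prefix-strip are the same expressions common.sh@411 and @427 execute):

    pci=0000:31:00.0                                   # first port found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs under the PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 here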
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:09:29.469 00:09:29.469 --- 10.0.0.2 ping statistics --- 00:09:29.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.469 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:09:29.469 00:09:29.469 --- 10.0.0.1 ping statistics --- 00:09:29.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.469 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3632988 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3632988 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3632988 ']' 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:29.469 15:21:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.469 [2024-11-06 15:21:46.776673] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
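nvmf_tcp_init, traced above, boils down to the following standalone recipe: move one port of the NIC pair into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) get separate network stacks on a single host, open TCP/4420 toward the initiator side, and ping both ways. The commands are taken directly from the trace; the namespace and interface names are specific to this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host

With both pings answering, nvmf_tgt is then launched inside the namespace via ip netns exec, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above.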
00:09:29.470 [2024-11-06 15:21:46.776743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.470 [2024-11-06 15:21:46.878743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.470 [2024-11-06 15:21:46.932851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.470 [2024-11-06 15:21:46.932906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.470 [2024-11-06 15:21:46.932915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.470 [2024-11-06 15:21:46.932922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.470 [2024-11-06 15:21:46.932928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.470 [2024-11-06 15:21:46.935018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.470 [2024-11-06 15:21:46.935179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.470 [2024-11-06 15:21:46.935339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.470 [2024-11-06 15:21:46.935339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.730 15:21:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:29.730 15:21:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:29.730 15:21:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.730 15:21:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.730 15:21:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.730 15:21:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.730 15:21:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:29.991 [2024-11-06 15:21:47.814246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.991 15:21:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.251 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:30.251 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.511 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:30.511 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.772 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:30.772 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.772 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:30.772 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:31.033 15:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:31.293 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:31.293 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:31.554 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:31.554 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:31.554 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:31.554 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:31.813 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:32.072 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:32.072 15:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.333 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:32.333 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:32.333 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.593 [2024-11-06 15:21:50.417511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.593 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:32.853 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:32.853 15:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:34.763 15:21:52 
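The rpc.py calls traced through fio.sh above reduce to the sequence below, run against the nvmf_tgt already listening on /var/tmp/spdk.sock inside the namespace. This is a condensed sketch, not the script verbatim: the rpc path is shortened, the loop stands in for the seven individual bdev_malloc_create calls, and the --hostnqn/--hostid flags common.sh adds to nvme connect are omitted for brevity:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

After the connect, the four namespaces surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, which is exactly what the fio job files below target.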
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:34.763 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:34.763 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.763 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:34.763 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:34.763 15:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:36.671 15:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:36.671 15:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:36.671 15:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.671 15:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:36.671 15:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.671 15:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:36.671 15:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:36.671 [global] 00:09:36.671 thread=1 00:09:36.671 invalidate=1 00:09:36.671 rw=write 00:09:36.671 time_based=1 00:09:36.671 runtime=1 00:09:36.671 ioengine=libaio 00:09:36.671 direct=1 00:09:36.671 bs=4096 00:09:36.671 iodepth=1 00:09:36.671 norandommap=0 00:09:36.671 numjobs=1 00:09:36.671 00:09:36.672 verify_dump=1 00:09:36.672 verify_backlog=512 00:09:36.672 verify_state_save=0 00:09:36.672 do_verify=1 00:09:36.672 verify=crc32c-intel 00:09:36.672 [job0] 00:09:36.672 filename=/dev/nvme0n1 00:09:36.672 [job1] 00:09:36.672 filename=/dev/nvme0n2 00:09:36.672 [job2] 00:09:36.672 filename=/dev/nvme0n3 00:09:36.672 [job3] 00:09:36.672 filename=/dev/nvme0n4 00:09:36.672 Could not set queue depth (nvme0n1) 00:09:36.672 Could not set queue depth (nvme0n2) 00:09:36.672 Could not set queue depth (nvme0n3) 00:09:36.672 Could not set queue depth (nvme0n4) 00:09:36.932 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.932 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.932 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.932 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:36.932 fio-3.35 00:09:36.932 Starting 4 threads 00:09:38.315 00:09:38.315 job0: (groupid=0, jobs=1): err= 0: pid=3634899: Wed Nov 6 15:21:56 2024 00:09:38.315 read: IOPS=155, BW=623KiB/s (638kB/s)(624KiB/1001msec) 00:09:38.315 slat (nsec): min=8134, max=39269, avg=25453.22, stdev=2206.91 00:09:38.315 clat (usec): min=735, max=41066, avg=4847.40, stdev=11816.94 00:09:38.315 lat (usec): min=743, max=41092, avg=4872.85, stdev=11816.88 00:09:38.315 clat percentiles (usec): 00:09:38.315 | 1.00th=[ 750], 5.00th=[ 848], 10.00th=[ 881], 20.00th=[ 930], 
00:09:38.315 | 30.00th=[ 963], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:09:38.315 | 70.00th=[ 1057], 80.00th=[ 1106], 90.00th=[ 1467], 95.00th=[41157], 00:09:38.315 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:38.315 | 99.99th=[41157] 00:09:38.316 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:38.316 slat (nsec): min=9277, max=52665, avg=30940.44, stdev=7371.49 00:09:38.316 clat (usec): min=121, max=769, avg=428.43, stdev=127.03 00:09:38.316 lat (usec): min=131, max=781, avg=459.37, stdev=128.51 00:09:38.316 clat percentiles (usec): 00:09:38.316 | 1.00th=[ 161], 5.00th=[ 243], 10.00th=[ 277], 20.00th=[ 310], 00:09:38.316 | 30.00th=[ 338], 40.00th=[ 392], 50.00th=[ 424], 60.00th=[ 461], 00:09:38.316 | 70.00th=[ 510], 80.00th=[ 545], 90.00th=[ 594], 95.00th=[ 635], 00:09:38.316 | 99.00th=[ 693], 99.50th=[ 742], 99.90th=[ 766], 99.95th=[ 766], 00:09:38.316 | 99.99th=[ 766] 00:09:38.316 bw ( KiB/s): min= 4087, max= 4087, per=41.11%, avg=4087.00, stdev= 0.00, samples=1 00:09:38.316 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:38.316 lat (usec) : 250=4.64%, 500=47.60%, 750=24.55%, 1000=9.43% 00:09:38.316 lat (msec) : 2=11.53%, 50=2.25% 00:09:38.316 cpu : usr=0.90%, sys=2.10%, ctx=668, majf=0, minf=1 00:09:38.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.316 issued rwts: total=156,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.316 job1: (groupid=0, jobs=1): err= 0: pid=3634900: Wed Nov 6 15:21:56 2024 00:09:38.316 read: IOPS=43, BW=174KiB/s (178kB/s)(176KiB/1010msec) 00:09:38.316 slat (nsec): min=7342, max=28987, avg=24708.45, stdev=3641.36 00:09:38.316 clat (usec): min=543, max=42044, avg=15239.59, stdev=19356.89 00:09:38.316 lat (usec): min=572, max=42069, avg=15264.30, stdev=19355.90 00:09:38.316 clat percentiles (usec): 00:09:38.316 | 1.00th=[ 545], 5.00th=[ 766], 10.00th=[ 873], 20.00th=[ 963], 00:09:38.316 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1057], 60.00th=[ 1418], 00:09:38.316 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:38.316 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:38.316 | 99.99th=[42206] 00:09:38.316 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:09:38.316 slat (nsec): min=9788, max=51843, avg=30222.61, stdev=8766.02 00:09:38.316 clat (usec): min=248, max=1015, avg=623.22, stdev=127.75 00:09:38.316 lat (usec): min=258, max=1049, avg=653.44, stdev=131.20 00:09:38.316 clat percentiles (usec): 00:09:38.316 | 1.00th=[ 293], 5.00th=[ 400], 10.00th=[ 453], 20.00th=[ 515], 00:09:38.316 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:09:38.316 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 783], 95.00th=[ 824], 00:09:38.316 | 99.00th=[ 898], 99.50th=[ 979], 99.90th=[ 1012], 99.95th=[ 1012], 00:09:38.316 | 99.99th=[ 1012] 00:09:38.316 bw ( KiB/s): min= 4087, max= 4087, per=41.11%, avg=4087.00, stdev= 0.00, samples=1 00:09:38.316 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:38.316 lat (usec) : 250=0.18%, 500=16.19%, 750=62.41%, 1000=15.65% 00:09:38.316 lat (msec) : 2=2.70%, 20=0.18%, 50=2.70% 00:09:38.316 cpu : usr=1.09%, sys=1.29%, ctx=556, majf=0, minf=1 00:09:38.316 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.316 issued rwts: total=44,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.316 job2: (groupid=0, jobs=1): err= 0: pid=3634901: Wed Nov 6 15:21:56 2024 00:09:38.316 read: IOPS=686, BW=2745KiB/s (2811kB/s)(2748KiB/1001msec) 00:09:38.316 slat (nsec): min=7037, max=73055, avg=24906.96, stdev=7139.68 00:09:38.316 clat (usec): min=245, max=1025, avg=769.53, stdev=108.40 00:09:38.316 lat (usec): min=253, max=1050, avg=794.43, stdev=109.94 00:09:38.316 clat percentiles (usec): 00:09:38.316 | 1.00th=[ 433], 5.00th=[ 578], 10.00th=[ 627], 20.00th=[ 693], 00:09:38.316 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 783], 60.00th=[ 816], 00:09:38.316 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 889], 95.00th=[ 914], 00:09:38.316 | 99.00th=[ 963], 99.50th=[ 996], 99.90th=[ 1029], 99.95th=[ 1029], 00:09:38.316 | 99.99th=[ 1029] 00:09:38.316 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:38.316 slat (nsec): min=10053, max=72343, avg=31855.62, stdev=9294.51 00:09:38.316 clat (usec): min=129, max=707, avg=399.25, stdev=106.04 00:09:38.316 lat (usec): min=139, max=758, avg=431.11, stdev=107.75 00:09:38.316 clat percentiles (usec): 00:09:38.316 | 1.00th=[ 194], 5.00th=[ 229], 10.00th=[ 289], 20.00th=[ 314], 00:09:38.316 | 30.00th=[ 326], 40.00th=[ 347], 50.00th=[ 400], 60.00th=[ 424], 00:09:38.316 | 70.00th=[ 457], 80.00th=[ 502], 90.00th=[ 545], 95.00th=[ 586], 00:09:38.316 | 99.00th=[ 635], 99.50th=[ 668], 99.90th=[ 709], 99.95th=[ 709], 00:09:38.316 | 99.99th=[ 709] 00:09:38.316 bw ( KiB/s): min= 4087, max= 4087, per=41.11%, avg=4087.00, stdev= 0.00, samples=1 00:09:38.316 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:38.316 lat (usec) : 250=4.03%, 500=44.48%, 750=26.71%, 1000=24.61% 00:09:38.316 lat (msec) : 2=0.18% 00:09:38.316 cpu : usr=2.80%, sys=4.90%, ctx=1714, majf=0, minf=1 00:09:38.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.316 issued rwts: total=687,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.316 job3: (groupid=0, jobs=1): err= 0: pid=3634902: Wed Nov 6 15:21:56 2024 00:09:38.316 read: IOPS=17, BW=69.9KiB/s (71.6kB/s)(72.0KiB/1030msec) 00:09:38.316 slat (nsec): min=26246, max=31810, avg=28344.44, stdev=2374.53 00:09:38.316 clat (usec): min=1130, max=44878, avg=39378.23, stdev=9591.58 00:09:38.316 lat (usec): min=1157, max=44909, avg=39406.58, stdev=9592.02 00:09:38.316 clat percentiles (usec): 00:09:38.316 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41157], 20.00th=[41157], 00:09:38.316 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:09:38.316 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[44827], 00:09:38.316 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:09:38.316 | 99.99th=[44827] 00:09:38.316 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:09:38.316 slat (nsec): min=9388, max=54155, avg=29785.36, stdev=9653.38 00:09:38.316 clat (usec): 
min=232, max=827, avg=590.38, stdev=103.89 00:09:38.316 lat (usec): min=242, max=845, avg=620.16, stdev=107.81 00:09:38.316 clat percentiles (usec): 00:09:38.316 | 1.00th=[ 347], 5.00th=[ 408], 10.00th=[ 449], 20.00th=[ 510], 00:09:38.316 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 619], 00:09:38.316 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 717], 95.00th=[ 758], 00:09:38.316 | 99.00th=[ 799], 99.50th=[ 807], 99.90th=[ 832], 99.95th=[ 832], 00:09:38.316 | 99.99th=[ 832] 00:09:38.316 bw ( KiB/s): min= 4096, max= 4096, per=41.20%, avg=4096.00, stdev= 0.00, samples=1 00:09:38.316 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:38.316 lat (usec) : 250=0.19%, 500=17.36%, 750=72.64%, 1000=6.42% 00:09:38.316 lat (msec) : 2=0.19%, 50=3.21% 00:09:38.316 cpu : usr=0.58%, sys=2.24%, ctx=530, majf=0, minf=1 00:09:38.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.316 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.316 00:09:38.316 Run status group 0 (all jobs): 00:09:38.316 READ: bw=3515KiB/s (3599kB/s), 69.9KiB/s-2745KiB/s (71.6kB/s-2811kB/s), io=3620KiB (3707kB), run=1001-1030msec 00:09:38.316 WRITE: bw=9942KiB/s (10.2MB/s), 1988KiB/s-4092KiB/s (2036kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1030msec 00:09:38.316 00:09:38.316 Disk stats (read/write): 00:09:38.316 nvme0n1: ios=66/512, merge=0/0, ticks=629/210, in_queue=839, util=87.27% 00:09:38.316 nvme0n2: ios=47/512, merge=0/0, ticks=552/304, in_queue=856, util=88.06% 00:09:38.316 nvme0n3: ios=534/983, merge=0/0, ticks=1300/374, in_queue=1674, util=97.37% 00:09:38.316 nvme0n4: ios=13/512, merge=0/0, ticks=504/244, in_queue=748, util=89.56% 00:09:38.316 15:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:38.316 [global] 00:09:38.316 thread=1 00:09:38.316 invalidate=1 00:09:38.316 rw=randwrite 00:09:38.316 time_based=1 00:09:38.316 runtime=1 00:09:38.316 ioengine=libaio 00:09:38.316 direct=1 00:09:38.316 bs=4096 00:09:38.316 iodepth=1 00:09:38.316 norandommap=0 00:09:38.316 numjobs=1 00:09:38.316 00:09:38.316 verify_dump=1 00:09:38.316 verify_backlog=512 00:09:38.316 verify_state_save=0 00:09:38.316 do_verify=1 00:09:38.316 verify=crc32c-intel 00:09:38.316 [job0] 00:09:38.316 filename=/dev/nvme0n1 00:09:38.316 [job1] 00:09:38.316 filename=/dev/nvme0n2 00:09:38.316 [job2] 00:09:38.316 filename=/dev/nvme0n3 00:09:38.316 [job3] 00:09:38.316 filename=/dev/nvme0n4 00:09:38.316 Could not set queue depth (nvme0n1) 00:09:38.316 Could not set queue depth (nvme0n2) 00:09:38.316 Could not set queue depth (nvme0n3) 00:09:38.316 Could not set queue depth (nvme0n4) 00:09:38.577 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.577 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.577 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.577 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.577 fio-3.35 00:09:38.577 Starting 4 
threads 00:09:39.959 00:09:39.959 job0: (groupid=0, jobs=1): err= 0: pid=3635418: Wed Nov 6 15:21:57 2024 00:09:39.959 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:39.959 slat (nsec): min=24013, max=58613, avg=25515.34, stdev=3508.07 00:09:39.959 clat (usec): min=749, max=1418, avg=1057.18, stdev=77.76 00:09:39.959 lat (usec): min=774, max=1443, avg=1082.70, stdev=77.78 00:09:39.959 clat percentiles (usec): 00:09:39.959 | 1.00th=[ 840], 5.00th=[ 914], 10.00th=[ 955], 20.00th=[ 1004], 00:09:39.959 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1057], 60.00th=[ 1074], 00:09:39.959 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:09:39.959 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1418], 99.95th=[ 1418], 00:09:39.959 | 99.99th=[ 1418] 00:09:39.959 write: IOPS=807, BW=3229KiB/s (3306kB/s)(3232KiB/1001msec); 0 zone resets 00:09:39.959 slat (nsec): min=9172, max=80895, avg=27625.57, stdev=9142.61 00:09:39.959 clat (usec): min=197, max=879, avg=511.79, stdev=107.59 00:09:39.959 lat (usec): min=228, max=911, avg=539.41, stdev=111.56 00:09:39.959 clat percentiles (usec): 00:09:39.959 | 1.00th=[ 273], 5.00th=[ 338], 10.00th=[ 367], 20.00th=[ 433], 00:09:39.959 | 30.00th=[ 457], 40.00th=[ 482], 50.00th=[ 506], 60.00th=[ 537], 00:09:39.959 | 70.00th=[ 570], 80.00th=[ 603], 90.00th=[ 652], 95.00th=[ 685], 00:09:39.959 | 99.00th=[ 791], 99.50th=[ 832], 99.90th=[ 881], 99.95th=[ 881], 00:09:39.959 | 99.99th=[ 881] 00:09:39.959 bw ( KiB/s): min= 4096, max= 4096, per=45.30%, avg=4096.00, stdev= 0.00, samples=1 00:09:39.959 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:39.959 lat (usec) : 250=0.23%, 500=29.39%, 750=30.76%, 1000=8.33% 00:09:39.959 lat (msec) : 2=31.29% 00:09:39.959 cpu : usr=1.90%, sys=3.70%, ctx=1321, majf=0, minf=1 00:09:39.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.960 issued rwts: total=512,808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.960 job1: (groupid=0, jobs=1): err= 0: pid=3635419: Wed Nov 6 15:21:57 2024 00:09:39.960 read: IOPS=17, BW=70.5KiB/s (72.2kB/s)(72.0KiB/1021msec) 00:09:39.960 slat (nsec): min=27162, max=28290, avg=27756.28, stdev=342.13 00:09:39.960 clat (usec): min=40872, max=41219, avg=40969.99, stdev=73.30 00:09:39.960 lat (usec): min=40900, max=41246, avg=40997.75, stdev=73.24 00:09:39.960 clat percentiles (usec): 00:09:39.960 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:39.960 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:39.960 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:39.960 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:39.960 | 99.99th=[41157] 00:09:39.960 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:09:39.960 slat (nsec): min=9024, max=54486, avg=32761.21, stdev=8493.72 00:09:39.960 clat (usec): min=133, max=784, avg=510.21, stdev=125.97 00:09:39.960 lat (usec): min=169, max=826, avg=542.97, stdev=129.35 00:09:39.960 clat percentiles (usec): 00:09:39.960 | 1.00th=[ 212], 5.00th=[ 277], 10.00th=[ 351], 20.00th=[ 392], 00:09:39.960 | 30.00th=[ 449], 40.00th=[ 490], 50.00th=[ 519], 60.00th=[ 553], 00:09:39.960 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 668], 
95.00th=[ 693], 00:09:39.960 | 99.00th=[ 725], 99.50th=[ 742], 99.90th=[ 783], 99.95th=[ 783], 00:09:39.960 | 99.99th=[ 783] 00:09:39.960 bw ( KiB/s): min= 4096, max= 4096, per=45.30%, avg=4096.00, stdev= 0.00, samples=1 00:09:39.960 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:39.960 lat (usec) : 250=2.26%, 500=38.87%, 750=55.09%, 1000=0.38% 00:09:39.960 lat (msec) : 50=3.40% 00:09:39.960 cpu : usr=1.47%, sys=1.76%, ctx=532, majf=0, minf=1 00:09:39.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.960 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.960 job2: (groupid=0, jobs=1): err= 0: pid=3635420: Wed Nov 6 15:21:57 2024 00:09:39.960 read: IOPS=16, BW=65.6KiB/s (67.1kB/s)(68.0KiB/1037msec) 00:09:39.960 slat (nsec): min=27449, max=32864, avg=28481.29, stdev=1611.84 00:09:39.960 clat (usec): min=40878, max=42150, avg=41688.08, stdev=433.53 00:09:39.960 lat (usec): min=40908, max=42183, avg=41716.56, stdev=433.17 00:09:39.960 clat percentiles (usec): 00:09:39.960 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:39.960 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:09:39.960 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:39.960 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:39.960 | 99.99th=[42206] 00:09:39.960 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:09:39.960 slat (nsec): min=9407, max=69844, avg=30750.99, stdev=10372.85 00:09:39.960 clat (usec): min=239, max=869, avg=600.37, stdev=120.17 00:09:39.960 lat (usec): min=250, max=904, avg=631.13, stdev=124.75 00:09:39.960 clat percentiles (usec): 00:09:39.960 | 1.00th=[ 330], 5.00th=[ 375], 10.00th=[ 437], 20.00th=[ 486], 00:09:39.960 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 00:09:39.960 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 766], 00:09:39.960 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 873], 99.95th=[ 873], 00:09:39.960 | 99.99th=[ 873] 00:09:39.960 bw ( KiB/s): min= 4096, max= 4096, per=45.30%, avg=4096.00, stdev= 0.00, samples=1 00:09:39.960 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:39.960 lat (usec) : 250=0.19%, 500=21.93%, 750=65.97%, 1000=8.70% 00:09:39.960 lat (msec) : 50=3.21% 00:09:39.960 cpu : usr=1.45%, sys=1.54%, ctx=530, majf=0, minf=1 00:09:39.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.960 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.960 job3: (groupid=0, jobs=1): err= 0: pid=3635421: Wed Nov 6 15:21:57 2024 00:09:39.960 read: IOPS=16, BW=67.7KiB/s (69.3kB/s)(68.0KiB/1005msec) 00:09:39.960 slat (nsec): min=26121, max=31101, avg=26746.82, stdev=1158.90 00:09:39.960 clat (usec): min=1050, max=45022, avg=39653.51, stdev=9979.96 00:09:39.960 lat (usec): min=1078, max=45053, avg=39680.26, stdev=9979.89 00:09:39.960 clat percentiles (usec): 00:09:39.960 | 1.00th=[ 1057], 5.00th=[ 1057], 
10.00th=[41157], 20.00th=[41681], 00:09:39.960 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:39.960 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[44827], 00:09:39.960 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:09:39.960 | 99.99th=[44827] 00:09:39.960 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:39.960 slat (nsec): min=9064, max=62881, avg=28583.25, stdev=9352.09 00:09:39.960 clat (usec): min=208, max=875, avg=608.91, stdev=110.88 00:09:39.960 lat (usec): min=218, max=907, avg=637.49, stdev=115.23 00:09:39.960 clat percentiles (usec): 00:09:39.960 | 1.00th=[ 359], 5.00th=[ 420], 10.00th=[ 453], 20.00th=[ 506], 00:09:39.960 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:09:39.960 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 766], 00:09:39.960 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 873], 99.95th=[ 873], 00:09:39.960 | 99.99th=[ 873] 00:09:39.960 bw ( KiB/s): min= 4096, max= 4096, per=45.30%, avg=4096.00, stdev= 0.00, samples=1 00:09:39.960 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:39.960 lat (usec) : 250=0.38%, 500=17.77%, 750=72.02%, 1000=6.62% 00:09:39.960 lat (msec) : 2=0.19%, 50=3.02% 00:09:39.960 cpu : usr=1.00%, sys=1.89%, ctx=529, majf=0, minf=1 00:09:39.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.960 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.960 00:09:39.960 Run status group 0 (all jobs): 00:09:39.960 READ: bw=2176KiB/s (2228kB/s), 65.6KiB/s-2046KiB/s (67.1kB/s-2095kB/s), io=2256KiB (2310kB), run=1001-1037msec 00:09:39.960 WRITE: bw=9041KiB/s (9258kB/s), 1975KiB/s-3229KiB/s (2022kB/s-3306kB/s), io=9376KiB (9601kB), run=1001-1037msec 00:09:39.960 00:09:39.960 Disk stats (read/write): 00:09:39.960 nvme0n1: ios=561/512, merge=0/0, ticks=571/249, in_queue=820, util=87.47% 00:09:39.960 nvme0n2: ios=41/512, merge=0/0, ticks=1244/185, in_queue=1429, util=96.84% 00:09:39.960 nvme0n3: ios=36/512, merge=0/0, ticks=1465/247, in_queue=1712, util=97.48% 00:09:39.960 nvme0n4: ios=13/512, merge=0/0, ticks=507/242, in_queue=749, util=89.49% 00:09:39.960 15:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:39.960 [global] 00:09:39.960 thread=1 00:09:39.960 invalidate=1 00:09:39.960 rw=write 00:09:39.960 time_based=1 00:09:39.960 runtime=1 00:09:39.960 ioengine=libaio 00:09:39.960 direct=1 00:09:39.960 bs=4096 00:09:39.960 iodepth=128 00:09:39.960 norandommap=0 00:09:39.960 numjobs=1 00:09:39.960 00:09:39.960 verify_dump=1 00:09:39.960 verify_backlog=512 00:09:39.960 verify_state_save=0 00:09:39.960 do_verify=1 00:09:39.960 verify=crc32c-intel 00:09:39.960 [job0] 00:09:39.960 filename=/dev/nvme0n1 00:09:39.960 [job1] 00:09:39.960 filename=/dev/nvme0n2 00:09:39.960 [job2] 00:09:39.960 filename=/dev/nvme0n3 00:09:39.960 [job3] 00:09:39.960 filename=/dev/nvme0n4 00:09:39.960 Could not set queue depth (nvme0n1) 00:09:39.960 Could not set queue depth (nvme0n2) 00:09:39.960 Could not set queue depth (nvme0n3) 00:09:39.960 Could not set queue depth (nvme0n4) 00:09:40.529 job0: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.529 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.529 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.529 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.529 fio-3.35 00:09:40.529 Starting 4 threads 00:09:41.472 00:09:41.472 job0: (groupid=0, jobs=1): err= 0: pid=3635948: Wed Nov 6 15:21:59 2024 00:09:41.472 read: IOPS=7384, BW=28.8MiB/s (30.2MB/s)(29.1MiB/1008msec) 00:09:41.472 slat (nsec): min=980, max=10016k, avg=65931.22, stdev=483479.31 00:09:41.472 clat (usec): min=2370, max=36062, avg=8594.76, stdev=3135.26 00:09:41.472 lat (usec): min=2376, max=36070, avg=8660.69, stdev=3171.93 00:09:41.472 clat percentiles (usec): 00:09:41.472 | 1.00th=[ 4228], 5.00th=[ 5473], 10.00th=[ 5800], 20.00th=[ 6390], 00:09:41.472 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7963], 60.00th=[ 8356], 00:09:41.472 | 70.00th=[ 8848], 80.00th=[10290], 90.00th=[12911], 95.00th=[14877], 00:09:41.472 | 99.00th=[19006], 99.50th=[22152], 99.90th=[29754], 99.95th=[35914], 00:09:41.472 | 99.99th=[35914] 00:09:41.472 write: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec); 0 zone resets 00:09:41.472 slat (nsec): min=1571, max=9517.8k, avg=61248.01, stdev=429621.31 00:09:41.472 clat (usec): min=1308, max=55073, avg=8321.81, stdev=6012.91 00:09:41.472 lat (usec): min=1321, max=55076, avg=8383.06, stdev=6040.63 00:09:41.472 clat percentiles (usec): 00:09:41.472 | 1.00th=[ 3130], 5.00th=[ 3949], 10.00th=[ 4293], 20.00th=[ 5276], 00:09:41.472 | 30.00th=[ 6128], 40.00th=[ 6652], 50.00th=[ 7111], 60.00th=[ 7898], 00:09:41.472 | 70.00th=[ 8586], 80.00th=[ 9896], 90.00th=[11863], 95.00th=[14615], 00:09:41.472 | 99.00th=[47449], 99.50th=[53740], 99.90th=[54789], 99.95th=[55313], 00:09:41.472 | 99.99th=[55313] 00:09:41.472 bw ( KiB/s): min=28672, max=32833, per=31.87%, avg=30752.50, stdev=2942.27, samples=2 00:09:41.472 iops : min= 7168, max= 8208, avg=7688.00, stdev=735.39, samples=2 00:09:41.472 lat (msec) : 2=0.06%, 4=3.68%, 10=75.55%, 20=19.33%, 50=1.01% 00:09:41.472 lat (msec) : 100=0.36% 00:09:41.472 cpu : usr=5.26%, sys=8.14%, ctx=583, majf=0, minf=2 00:09:41.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.472 issued rwts: total=7444,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.472 job1: (groupid=0, jobs=1): err= 0: pid=3635949: Wed Nov 6 15:21:59 2024 00:09:41.472 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:09:41.472 slat (nsec): min=893, max=11723k, avg=68765.21, stdev=475918.90 00:09:41.472 clat (usec): min=2813, max=26853, avg=9185.57, stdev=3819.10 00:09:41.472 lat (usec): min=2816, max=28142, avg=9254.34, stdev=3853.91 00:09:41.472 clat percentiles (usec): 00:09:41.472 | 1.00th=[ 3458], 5.00th=[ 4883], 10.00th=[ 5866], 20.00th=[ 6259], 00:09:41.472 | 30.00th=[ 7177], 40.00th=[ 7635], 50.00th=[ 8094], 60.00th=[ 8717], 00:09:41.472 | 70.00th=[ 9372], 80.00th=[11731], 90.00th=[15270], 95.00th=[17695], 00:09:41.472 | 99.00th=[21627], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:09:41.472 | 99.99th=[26870] 
00:09:41.472 write: IOPS=7424, BW=29.0MiB/s (30.4MB/s)(29.1MiB/1003msec); 0 zone resets 00:09:41.472 slat (nsec): min=1536, max=11669k, avg=62600.47, stdev=361007.86 00:09:41.472 clat (usec): min=1434, max=19694, avg=8214.70, stdev=2710.65 00:09:41.472 lat (usec): min=1444, max=19702, avg=8277.30, stdev=2733.81 00:09:41.472 clat percentiles (usec): 00:09:41.472 | 1.00th=[ 3392], 5.00th=[ 4817], 10.00th=[ 5669], 20.00th=[ 6194], 00:09:41.472 | 30.00th=[ 7111], 40.00th=[ 7504], 50.00th=[ 7963], 60.00th=[ 8291], 00:09:41.472 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[10683], 95.00th=[14615], 00:09:41.472 | 99.00th=[18482], 99.50th=[19268], 99.90th=[19792], 99.95th=[19792], 00:09:41.472 | 99.99th=[19792] 00:09:41.472 bw ( KiB/s): min=28729, max=29888, per=30.37%, avg=29308.50, stdev=819.54, samples=2 00:09:41.472 iops : min= 7182, max= 7472, avg=7327.00, stdev=205.06, samples=2 00:09:41.472 lat (msec) : 2=0.17%, 4=1.70%, 10=79.62%, 20=17.80%, 50=0.71% 00:09:41.472 cpu : usr=3.19%, sys=7.58%, ctx=670, majf=0, minf=1 00:09:41.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.472 issued rwts: total=7168,7447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.472 job2: (groupid=0, jobs=1): err= 0: pid=3635950: Wed Nov 6 15:21:59 2024 00:09:41.472 read: IOPS=5553, BW=21.7MiB/s (22.7MB/s)(21.9MiB/1008msec) 00:09:41.472 slat (nsec): min=997, max=12430k, avg=80230.95, stdev=592275.51 00:09:41.472 clat (usec): min=3497, max=43690, avg=11404.15, stdev=4244.87 00:09:41.472 lat (usec): min=4098, max=43771, avg=11484.38, stdev=4281.27 00:09:41.472 clat percentiles (usec): 00:09:41.472 | 1.00th=[ 5866], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 8586], 00:09:41.472 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[11076], 00:09:41.472 | 70.00th=[12518], 80.00th=[13304], 90.00th=[15795], 95.00th=[18220], 00:09:41.472 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:09:41.472 | 99.99th=[43779] 00:09:41.472 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:09:41.472 slat (nsec): min=1778, max=13095k, avg=89506.03, stdev=667016.91 00:09:41.472 clat (usec): min=2456, max=37730, avg=11243.72, stdev=4624.81 00:09:41.472 lat (usec): min=2483, max=37735, avg=11333.23, stdev=4684.36 00:09:41.472 clat percentiles (usec): 00:09:41.472 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 7701], 20.00th=[ 8455], 00:09:41.472 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159], 00:09:41.472 | 70.00th=[10814], 80.00th=[13042], 90.00th=[18220], 95.00th=[22938], 00:09:41.472 | 99.00th=[26084], 99.50th=[29230], 99.90th=[30802], 99.95th=[32637], 00:09:41.472 | 99.99th=[37487] 00:09:41.472 bw ( KiB/s): min=20480, max=24576, per=23.34%, avg=22528.00, stdev=2896.31, samples=2 00:09:41.472 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:41.472 lat (msec) : 4=0.25%, 10=52.13%, 20=41.82%, 50=5.81% 00:09:41.472 cpu : usr=4.37%, sys=7.05%, ctx=269, majf=0, minf=2 00:09:41.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.472 issued rwts: total=5598,5632,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:09:41.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.472 job3: (groupid=0, jobs=1): err= 0: pid=3635951: Wed Nov 6 15:21:59 2024 00:09:41.472 read: IOPS=3487, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1009msec) 00:09:41.472 slat (nsec): min=939, max=53302k, avg=163250.24, stdev=1551245.48 00:09:41.472 clat (usec): min=1146, max=102097, avg=19847.18, stdev=19601.94 00:09:41.472 lat (msec): min=5, max=102, avg=20.01, stdev=19.72 00:09:41.472 clat percentiles (msec): 00:09:41.472 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:09:41.472 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 15], 00:09:41.472 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 48], 95.00th=[ 69], 00:09:41.472 | 99.00th=[ 97], 99.50th=[ 103], 99.90th=[ 103], 99.95th=[ 103], 00:09:41.472 | 99.99th=[ 103] 00:09:41.472 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:09:41.472 slat (nsec): min=1671, max=11408k, avg=107794.22, stdev=694770.45 00:09:41.472 clat (usec): min=637, max=92117, avg=16199.93, stdev=14963.84 00:09:41.472 lat (usec): min=646, max=92127, avg=16307.73, stdev=15034.73 00:09:41.472 clat percentiles (usec): 00:09:41.472 | 1.00th=[ 1237], 5.00th=[ 5080], 10.00th=[ 8029], 20.00th=[ 8717], 00:09:41.472 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11994], 00:09:41.472 | 70.00th=[13304], 80.00th=[20579], 90.00th=[35914], 95.00th=[49021], 00:09:41.472 | 99.00th=[91751], 99.50th=[91751], 99.90th=[91751], 99.95th=[91751], 00:09:41.472 | 99.99th=[91751] 00:09:41.472 bw ( KiB/s): min= 6144, max=22573, per=14.88%, avg=14358.50, stdev=11617.06, samples=2 00:09:41.472 iops : min= 1536, max= 5643, avg=3589.50, stdev=2904.09, samples=2 00:09:41.472 lat (usec) : 750=0.11%, 1000=0.23% 00:09:41.472 lat (msec) : 2=0.68%, 4=0.51%, 10=36.35%, 20=40.88%, 50=15.02% 00:09:41.472 lat (msec) : 100=5.79%, 250=0.44% 00:09:41.472 cpu : usr=2.58%, sys=4.37%, ctx=374, majf=0, minf=1 00:09:41.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.472 issued rwts: total=3519,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.472 00:09:41.472 Run status group 0 (all jobs): 00:09:41.472 READ: bw=91.9MiB/s (96.3MB/s), 13.6MiB/s-28.8MiB/s (14.3MB/s-30.2MB/s), io=92.7MiB (97.2MB), run=1003-1009msec 00:09:41.472 WRITE: bw=94.2MiB/s (98.8MB/s), 13.9MiB/s-29.8MiB/s (14.5MB/s-31.2MB/s), io=95.1MiB (99.7MB), run=1003-1009msec 00:09:41.472 00:09:41.472 Disk stats (read/write): 00:09:41.472 nvme0n1: ios=6706/7015, merge=0/0, ticks=52146/49028, in_queue=101174, util=87.88% 00:09:41.472 nvme0n2: ios=5677/5799, merge=0/0, ticks=27801/22929, in_queue=50730, util=88.90% 00:09:41.472 nvme0n3: ios=4658/4651, merge=0/0, ticks=24604/24049, in_queue=48653, util=99.58% 00:09:41.472 nvme0n4: ios=3215/3584, merge=0/0, ticks=22532/27303, in_queue=49835, util=90.86% 00:09:41.473 15:21:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:41.734 [global] 00:09:41.734 thread=1 00:09:41.734 invalidate=1 00:09:41.734 rw=randwrite 00:09:41.734 time_based=1 00:09:41.734 runtime=1 00:09:41.734 ioengine=libaio 00:09:41.734 direct=1 00:09:41.734 bs=4096 00:09:41.734 
iodepth=128 00:09:41.734 norandommap=0 00:09:41.734 numjobs=1 00:09:41.734 00:09:41.734 verify_dump=1 00:09:41.734 verify_backlog=512 00:09:41.734 verify_state_save=0 00:09:41.734 do_verify=1 00:09:41.734 verify=crc32c-intel 00:09:41.734 [job0] 00:09:41.734 filename=/dev/nvme0n1 00:09:41.734 [job1] 00:09:41.734 filename=/dev/nvme0n2 00:09:41.734 [job2] 00:09:41.734 filename=/dev/nvme0n3 00:09:41.734 [job3] 00:09:41.734 filename=/dev/nvme0n4 00:09:41.734 Could not set queue depth (nvme0n1) 00:09:41.734 Could not set queue depth (nvme0n2) 00:09:41.734 Could not set queue depth (nvme0n3) 00:09:41.734 Could not set queue depth (nvme0n4) 00:09:41.994 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.994 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.994 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.994 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.994 fio-3.35 00:09:41.994 Starting 4 threads 00:09:43.380 00:09:43.380 job0: (groupid=0, jobs=1): err= 0: pid=3636467: Wed Nov 6 15:22:01 2024 00:09:43.380 read: IOPS=6828, BW=26.7MiB/s (28.0MB/s)(26.8MiB/1006msec) 00:09:43.380 slat (nsec): min=995, max=11005k, avg=70632.64, stdev=514850.78 00:09:43.380 clat (usec): min=3302, max=33417, avg=8980.09, stdev=3134.73 00:09:43.380 lat (usec): min=3310, max=33420, avg=9050.72, stdev=3174.06 00:09:43.380 clat percentiles (usec): 00:09:43.380 | 1.00th=[ 4047], 5.00th=[ 5735], 10.00th=[ 6063], 20.00th=[ 6718], 00:09:43.380 | 30.00th=[ 7111], 40.00th=[ 7570], 50.00th=[ 8029], 60.00th=[ 8979], 00:09:43.380 | 70.00th=[10028], 80.00th=[11076], 90.00th=[12911], 95.00th=[13566], 00:09:43.380 | 99.00th=[18220], 99.50th=[27395], 99.90th=[32637], 99.95th=[33424], 00:09:43.380 | 99.99th=[33424] 00:09:43.380 write: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec); 0 zone resets 00:09:43.380 slat (nsec): min=1665, max=7403.8k, avg=66573.74, stdev=369244.77 00:09:43.380 clat (usec): min=2022, max=33419, avg=9184.08, stdev=5753.02 00:09:43.380 lat (usec): min=2030, max=33425, avg=9250.65, stdev=5792.24 00:09:43.380 clat percentiles (usec): 00:09:43.380 | 1.00th=[ 2835], 5.00th=[ 4080], 10.00th=[ 4424], 20.00th=[ 5866], 00:09:43.380 | 30.00th=[ 6521], 40.00th=[ 6849], 50.00th=[ 7177], 60.00th=[ 7373], 00:09:43.380 | 70.00th=[ 8291], 80.00th=[12256], 90.00th=[17957], 95.00th=[23725], 00:09:43.380 | 99.00th=[28181], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:09:43.380 | 99.99th=[33424] 00:09:43.380 bw ( KiB/s): min=27376, max=29968, per=27.05%, avg=28672.00, stdev=1832.82, samples=2 00:09:43.380 iops : min= 6844, max= 7492, avg=7168.00, stdev=458.21, samples=2 00:09:43.380 lat (msec) : 4=2.75%, 10=71.63%, 20=21.00%, 50=4.62% 00:09:43.380 cpu : usr=4.08%, sys=8.26%, ctx=699, majf=0, minf=1 00:09:43.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:43.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.380 issued rwts: total=6869,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.380 job1: (groupid=0, jobs=1): err= 0: pid=3636468: Wed Nov 6 15:22:01 2024 00:09:43.380 read: IOPS=7626, BW=29.8MiB/s 
(31.2MB/s)(30.0MiB/1007msec) 00:09:43.380 slat (nsec): min=912, max=7799.4k, avg=62275.67, stdev=446728.87 00:09:43.380 clat (usec): min=3083, max=21630, avg=8190.58, stdev=2277.39 00:09:43.380 lat (usec): min=3090, max=21633, avg=8252.86, stdev=2304.37 00:09:43.380 clat percentiles (usec): 00:09:43.380 | 1.00th=[ 4228], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6587], 00:09:43.380 | 30.00th=[ 7046], 40.00th=[ 7504], 50.00th=[ 7963], 60.00th=[ 8160], 00:09:43.380 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[10945], 95.00th=[12518], 00:09:43.380 | 99.00th=[16581], 99.50th=[20055], 99.90th=[21365], 99.95th=[21627], 00:09:43.380 | 99.99th=[21627] 00:09:43.380 write: IOPS=8108, BW=31.7MiB/s (33.2MB/s)(31.9MiB/1007msec); 0 zone resets 00:09:43.380 slat (nsec): min=1573, max=7359.7k, avg=58856.68, stdev=370158.23 00:09:43.380 clat (usec): min=1151, max=21624, avg=7939.06, stdev=3271.97 00:09:43.380 lat (usec): min=1162, max=21626, avg=7997.92, stdev=3294.82 00:09:43.380 clat percentiles (usec): 00:09:43.380 | 1.00th=[ 2966], 5.00th=[ 4113], 10.00th=[ 4359], 20.00th=[ 5276], 00:09:43.380 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 7111], 60.00th=[ 7898], 00:09:43.381 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[14222], 95.00th=[15401], 00:09:43.381 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:09:43.381 | 99.99th=[21627] 00:09:43.381 bw ( KiB/s): min=31536, max=32768, per=30.33%, avg=32152.00, stdev=871.16, samples=2 00:09:43.381 iops : min= 7884, max= 8192, avg=8038.00, stdev=217.79, samples=2 00:09:43.381 lat (msec) : 2=0.03%, 4=2.46%, 10=82.18%, 20=15.05%, 50=0.29% 00:09:43.381 cpu : usr=5.47%, sys=7.75%, ctx=629, majf=0, minf=1 00:09:43.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:43.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.381 issued rwts: total=7680,8165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.381 job2: (groupid=0, jobs=1): err= 0: pid=3636469: Wed Nov 6 15:22:01 2024 00:09:43.381 read: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec) 00:09:43.381 slat (nsec): min=994, max=7409.1k, avg=74398.44, stdev=530861.56 00:09:43.381 clat (usec): min=2952, max=29341, avg=9664.88, stdev=3424.76 00:09:43.381 lat (usec): min=2957, max=29354, avg=9739.28, stdev=3459.66 00:09:43.381 clat percentiles (usec): 00:09:43.381 | 1.00th=[ 4490], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 7504], 00:09:43.381 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 9110], 00:09:43.381 | 70.00th=[ 9896], 80.00th=[11469], 90.00th=[14222], 95.00th=[16188], 00:09:43.381 | 99.00th=[23200], 99.50th=[23462], 99.90th=[24773], 99.95th=[27132], 00:09:43.381 | 99.99th=[29230] 00:09:43.381 write: IOPS=7211, BW=28.2MiB/s (29.5MB/s)(28.3MiB/1006msec); 0 zone resets 00:09:43.381 slat (nsec): min=1610, max=8448.7k, avg=57819.18, stdev=342361.87 00:09:43.381 clat (usec): min=1150, max=26852, avg=8035.88, stdev=2959.30 00:09:43.381 lat (usec): min=1161, max=26854, avg=8093.70, stdev=2975.33 00:09:43.381 clat percentiles (usec): 00:09:43.381 | 1.00th=[ 2737], 5.00th=[ 4015], 10.00th=[ 4948], 20.00th=[ 6652], 00:09:43.381 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8094], 00:09:43.381 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 9372], 95.00th=[14353], 00:09:43.381 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22938], 99.95th=[22938], 
00:09:43.381 | 99.99th=[26870] 00:09:43.381 bw ( KiB/s): min=24592, max=32752, per=27.05%, avg=28672.00, stdev=5769.99, samples=2 00:09:43.381 iops : min= 6148, max= 8188, avg=7168.00, stdev=1442.50, samples=2 00:09:43.381 lat (msec) : 2=0.01%, 4=2.73%, 10=78.69%, 20=16.36%, 50=2.20% 00:09:43.381 cpu : usr=5.57%, sys=6.27%, ctx=775, majf=0, minf=2 00:09:43.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:43.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.381 issued rwts: total=7168,7255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.381 job3: (groupid=0, jobs=1): err= 0: pid=3636470: Wed Nov 6 15:22:01 2024 00:09:43.381 read: IOPS=4024, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1005msec) 00:09:43.381 slat (nsec): min=952, max=15812k, avg=129041.18, stdev=883806.71 00:09:43.381 clat (usec): min=2296, max=61797, avg=15333.24, stdev=9159.40 00:09:43.381 lat (usec): min=5970, max=61823, avg=15462.28, stdev=9247.95 00:09:43.381 clat percentiles (usec): 00:09:43.381 | 1.00th=[ 7308], 5.00th=[ 9110], 10.00th=[ 9241], 20.00th=[ 9634], 00:09:43.381 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11994], 60.00th=[13960], 00:09:43.381 | 70.00th=[14877], 80.00th=[17171], 90.00th=[24511], 95.00th=[36439], 00:09:43.381 | 99.00th=[53740], 99.50th=[53740], 99.90th=[59507], 99.95th=[60556], 00:09:43.381 | 99.99th=[61604] 00:09:43.381 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:43.381 slat (nsec): min=1605, max=16301k, avg=108754.85, stdev=787889.00 00:09:43.381 clat (usec): min=4176, max=51301, avg=15923.11, stdev=8554.94 00:09:43.381 lat (usec): min=4183, max=51323, avg=16031.86, stdev=8624.40 00:09:43.381 clat percentiles (usec): 00:09:43.381 | 1.00th=[ 6390], 5.00th=[ 7177], 10.00th=[ 8979], 20.00th=[ 9634], 00:09:43.381 | 30.00th=[10421], 40.00th=[11600], 50.00th=[13173], 60.00th=[15139], 00:09:43.381 | 70.00th=[16909], 80.00th=[20317], 90.00th=[30278], 95.00th=[37487], 00:09:43.381 | 99.00th=[40633], 99.50th=[43254], 99.90th=[46924], 99.95th=[48497], 00:09:43.381 | 99.99th=[51119] 00:09:43.381 bw ( KiB/s): min=14032, max=18736, per=15.46%, avg=16384.00, stdev=3326.23, samples=2 00:09:43.381 iops : min= 3508, max= 4684, avg=4096.00, stdev=831.56, samples=2 00:09:43.381 lat (msec) : 4=0.01%, 10=26.41%, 20=54.35%, 50=17.95%, 100=1.28% 00:09:43.381 cpu : usr=3.09%, sys=4.28%, ctx=312, majf=0, minf=1 00:09:43.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:43.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.381 issued rwts: total=4045,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.381 00:09:43.381 Run status group 0 (all jobs): 00:09:43.381 READ: bw=99.9MiB/s (105MB/s), 15.7MiB/s-29.8MiB/s (16.5MB/s-31.2MB/s), io=101MiB (106MB), run=1005-1007msec 00:09:43.381 WRITE: bw=104MiB/s (109MB/s), 15.9MiB/s-31.7MiB/s (16.7MB/s-33.2MB/s), io=104MiB (109MB), run=1005-1007msec 00:09:43.381 00:09:43.381 Disk stats (read/write): 00:09:43.381 nvme0n1: ios=5666/5647, merge=0/0, ticks=49078/53371, in_queue=102449, util=98.30% 00:09:43.381 nvme0n2: ios=6407/6656, merge=0/0, ticks=49947/51812, in_queue=101759, util=88.29% 00:09:43.381 nvme0n3: 
ios=6193/6565, merge=0/0, ticks=51576/49348, in_queue=100924, util=96.95% 00:09:43.381 nvme0n4: ios=3072/3297, merge=0/0, ticks=21306/23445, in_queue=44751, util=89.17% 00:09:43.381 15:22:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:43.381 15:22:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3636740 00:09:43.381 15:22:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:43.381 15:22:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:43.381 [global] 00:09:43.381 thread=1 00:09:43.381 invalidate=1 00:09:43.381 rw=read 00:09:43.381 time_based=1 00:09:43.381 runtime=10 00:09:43.381 ioengine=libaio 00:09:43.381 direct=1 00:09:43.381 bs=4096 00:09:43.381 iodepth=1 00:09:43.381 norandommap=1 00:09:43.381 numjobs=1 00:09:43.381 00:09:43.381 [job0] 00:09:43.381 filename=/dev/nvme0n1 00:09:43.381 [job1] 00:09:43.381 filename=/dev/nvme0n2 00:09:43.381 [job2] 00:09:43.381 filename=/dev/nvme0n3 00:09:43.381 [job3] 00:09:43.381 filename=/dev/nvme0n4 00:09:43.381 Could not set queue depth (nvme0n1) 00:09:43.381 Could not set queue depth (nvme0n2) 00:09:43.381 Could not set queue depth (nvme0n3) 00:09:43.381 Could not set queue depth (nvme0n4) 00:09:43.642 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.642 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.642 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.642 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.642 fio-3.35 00:09:43.642 Starting 4 threads 00:09:46.190 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:46.450 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:46.450 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=262144, buflen=4096 00:09:46.450 fio: pid=3637000, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:46.711 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=8454144, buflen=4096 00:09:46.711 fio: pid=3636999, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:46.711 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:46.711 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:46.711 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=294912, buflen=4096 00:09:46.711 fio: pid=3636997, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:46.711 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:46.711 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc1 00:09:46.972 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=323584, buflen=4096 00:09:46.972 fio: pid=3636998, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:46.972 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:46.972 15:22:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:46.973 00:09:46.973 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3636997: Wed Nov 6 15:22:04 2024 00:09:46.973 read: IOPS=24, BW=96.4KiB/s (98.7kB/s)(288KiB/2988msec) 00:09:46.973 slat (usec): min=10, max=12577, avg=193.05, stdev=1469.64 00:09:46.973 clat (usec): min=910, max=42072, avg=40995.98, stdev=4814.49 00:09:46.973 lat (usec): min=948, max=53897, avg=41191.34, stdev=5046.82 00:09:46.973 clat percentiles (usec): 00:09:46.973 | 1.00th=[ 914], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:46.973 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:09:46.973 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:46.973 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:46.973 | 99.99th=[42206] 00:09:46.973 bw ( KiB/s): min= 96, max= 104, per=3.37%, avg=97.60, stdev= 3.58, samples=5 00:09:46.973 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:09:46.973 lat (usec) : 1000=1.37% 00:09:46.973 lat (msec) : 50=97.26% 00:09:46.973 cpu : usr=0.10%, sys=0.00%, ctx=74, majf=0, minf=1 00:09:46.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.973 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.973 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.973 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3636998: Wed Nov 6 15:22:04 2024 00:09:46.973 read: IOPS=25, BW=99.7KiB/s (102kB/s)(316KiB/3169msec) 00:09:46.973 slat (usec): min=26, max=11726, avg=258.01, stdev=1501.82 00:09:46.973 clat (usec): min=600, max=44030, avg=39568.33, stdev=7789.53 00:09:46.973 lat (usec): min=627, max=52984, avg=39829.27, stdev=7984.98 00:09:46.973 clat percentiles (usec): 00:09:46.973 | 1.00th=[ 603], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:46.973 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:46.973 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:46.973 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:09:46.973 | 99.99th=[43779] 00:09:46.973 bw ( KiB/s): min= 94, max= 112, per=3.44%, avg=99.67, stdev= 6.98, samples=6 00:09:46.973 iops : min= 23, max= 28, avg=24.83, stdev= 1.83, samples=6 00:09:46.973 lat (usec) : 750=2.50%, 1000=1.25% 00:09:46.973 lat (msec) : 50=95.00% 00:09:46.973 cpu : usr=0.00%, sys=0.16%, ctx=82, majf=0, minf=2 00:09:46.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.973 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.973 issued 
rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.973 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3636999: Wed Nov 6 15:22:04 2024 00:09:46.973 read: IOPS=738, BW=2954KiB/s (3025kB/s)(8256KiB/2795msec) 00:09:46.973 slat (nsec): min=6475, max=64553, avg=25871.93, stdev=4919.71 00:09:46.973 clat (usec): min=328, max=41340, avg=1311.69, stdev=4263.43 00:09:46.973 lat (usec): min=355, max=41367, avg=1337.56, stdev=4263.58 00:09:46.973 clat percentiles (usec): 00:09:46.973 | 1.00th=[ 396], 5.00th=[ 510], 10.00th=[ 570], 20.00th=[ 676], 00:09:46.973 | 30.00th=[ 766], 40.00th=[ 832], 50.00th=[ 914], 60.00th=[ 971], 00:09:46.973 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1037], 95.00th=[ 1074], 00:09:46.973 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:46.973 | 99.99th=[41157] 00:09:46.973 bw ( KiB/s): min= 96, max= 4824, per=97.06%, avg=2792.00, stdev=2093.44, samples=5 00:09:46.973 iops : min= 24, max= 1206, avg=698.00, stdev=523.36, samples=5 00:09:46.973 lat (usec) : 500=4.65%, 750=22.62%, 1000=49.69% 00:09:46.973 lat (msec) : 2=21.84%, 50=1.16% 00:09:46.973 cpu : usr=0.72%, sys=2.76%, ctx=2066, majf=0, minf=2 00:09:46.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.973 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.973 issued rwts: total=2065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.973 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3637000: Wed Nov 6 15:22:04 2024 00:09:46.973 read: IOPS=24, BW=97.5KiB/s (99.9kB/s)(256KiB/2625msec) 00:09:46.973 slat (nsec): min=25745, max=37085, avg=26401.05, stdev=1453.20 00:09:46.973 clat (usec): min=849, max=42083, avg=40641.41, stdev=7191.52 00:09:46.973 lat (usec): min=886, max=42109, avg=40667.81, stdev=7190.55 00:09:46.973 clat percentiles (usec): 00:09:46.973 | 1.00th=[ 848], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:09:46.973 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:46.973 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:46.973 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:46.973 | 99.99th=[42206] 00:09:46.973 bw ( KiB/s): min= 96, max= 104, per=3.37%, avg=97.60, stdev= 3.58, samples=5 00:09:46.973 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:09:46.973 lat (usec) : 1000=1.54% 00:09:46.973 lat (msec) : 2=1.54%, 50=95.38% 00:09:46.973 cpu : usr=0.11%, sys=0.00%, ctx=65, majf=0, minf=2 00:09:46.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.973 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.973 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.973 00:09:46.973 Run status group 0 (all jobs): 00:09:46.973 READ: bw=2877KiB/s (2946kB/s), 96.4KiB/s-2954KiB/s (98.7kB/s-3025kB/s), io=9116KiB (9335kB), run=2625-3169msec 00:09:46.973 00:09:46.973 Disk stats (read/write): 00:09:46.973 nvme0n1: ios=69/0, merge=0/0, ticks=2828/0, 
in_queue=2828, util=94.59% 00:09:46.973 nvme0n2: ios=77/0, merge=0/0, ticks=3044/0, in_queue=3044, util=95.48% 00:09:46.973 nvme0n3: ios=1868/0, merge=0/0, ticks=2446/0, in_queue=2446, util=96.08% 00:09:46.973 nvme0n4: ios=63/0, merge=0/0, ticks=2561/0, in_queue=2561, util=96.43% 00:09:47.234 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:47.234 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:47.234 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:47.234 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:47.494 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:47.494 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:47.755 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:47.755 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3636740 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:48.016 nvmf hotplug test: fio failed as expected 00:09:48.016 15:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:48.277 rmmod nvme_tcp 00:09:48.277 rmmod nvme_fabrics 00:09:48.277 rmmod nvme_keyring 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3632988 ']' 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3632988 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3632988 ']' 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3632988 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3632988 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3632988' 00:09:48.277 killing process with pid 3632988 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3632988 00:09:48.277 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3632988 00:09:48.537 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:48.537 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:48.537 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:48.537 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 
00:09:48.537 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:48.537 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:48.537 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:48.537 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:48.537 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:48.537 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.537 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.537 15:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.449 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.449 00:09:50.449 real 0m29.495s 00:09:50.449 user 2m32.857s 00:09:50.449 sys 0m9.450s 00:09:50.449 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:50.449 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.449 ************************************ 00:09:50.449 END TEST nvmf_fio_target 00:09:50.449 ************************************ 00:09:50.449 15:22:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:50.449 15:22:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:50.449 15:22:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:50.449 15:22:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.710 ************************************ 00:09:50.710 START TEST nvmf_bdevio 00:09:50.710 ************************************ 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:50.710 * Looking for test storage... 
00:09:50.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.710 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:50.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.711 --rc genhtml_branch_coverage=1 00:09:50.711 --rc genhtml_function_coverage=1 00:09:50.711 --rc genhtml_legend=1 00:09:50.711 --rc geninfo_all_blocks=1 00:09:50.711 --rc geninfo_unexecuted_blocks=1 00:09:50.711 00:09:50.711 ' 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:50.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.711 --rc genhtml_branch_coverage=1 00:09:50.711 --rc genhtml_function_coverage=1 00:09:50.711 --rc genhtml_legend=1 00:09:50.711 --rc geninfo_all_blocks=1 00:09:50.711 --rc geninfo_unexecuted_blocks=1 00:09:50.711 00:09:50.711 ' 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:50.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.711 --rc genhtml_branch_coverage=1 00:09:50.711 --rc genhtml_function_coverage=1 00:09:50.711 --rc genhtml_legend=1 00:09:50.711 --rc geninfo_all_blocks=1 00:09:50.711 --rc geninfo_unexecuted_blocks=1 00:09:50.711 00:09:50.711 ' 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:50.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.711 --rc genhtml_branch_coverage=1 00:09:50.711 --rc genhtml_function_coverage=1 00:09:50.711 --rc genhtml_legend=1 00:09:50.711 --rc geninfo_all_blocks=1 00:09:50.711 --rc geninfo_unexecuted_blocks=1 00:09:50.711 00:09:50.711 ' 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.711 15:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:58.851 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:58.851 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:58.851 15:22:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:58.851 Found net devices under 0000:31:00.0: cvl_0_0 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:58.851 Found net devices under 0000:31:00.1: cvl_0_1 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.851 
15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.851 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:58.852 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:58.852 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.852 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.852 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:58.852 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:58.852 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.852 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.852 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.852 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.852 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:58.852 15:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:58.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:09:58.852 00:09:58.852 --- 10.0.0.2 ping statistics --- 00:09:58.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.852 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:58.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:09:58.852 00:09:58.852 --- 10.0.0.1 ping statistics --- 00:09:58.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.852 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3642074 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3642074 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3642074 ']' 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:58.852 15:22:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.852 [2024-11-06 15:22:16.242649] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:09:58.852 [2024-11-06 15:22:16.242701] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.852 [2024-11-06 15:22:16.345285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.852 [2024-11-06 15:22:16.395789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.852 [2024-11-06 15:22:16.395838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.852 [2024-11-06 15:22:16.395847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.852 [2024-11-06 15:22:16.395855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.852 [2024-11-06 15:22:16.395861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.852 [2024-11-06 15:22:16.397945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:58.852 [2024-11-06 15:22:16.398142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:58.852 [2024-11-06 15:22:16.398283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.852 [2024-11-06 15:22:16.398284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:59.113 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:59.113 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:09:59.113 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.113 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.113 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.375 [2024-11-06 15:22:17.117215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.375 Malloc0 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.375 15:22:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.375 [2024-11-06 15:22:17.199985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:59.375 { 00:09:59.375 "params": { 00:09:59.375 "name": "Nvme$subsystem", 00:09:59.375 "trtype": "$TEST_TRANSPORT", 00:09:59.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.375 "adrfam": "ipv4", 00:09:59.375 "trsvcid": "$NVMF_PORT", 00:09:59.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.375 "hdgst": ${hdgst:-false}, 00:09:59.375 "ddgst": ${ddgst:-false} 00:09:59.375 }, 00:09:59.375 "method": "bdev_nvme_attach_controller" 00:09:59.375 } 00:09:59.375 EOF 00:09:59.375 )") 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:59.375 15:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:59.375 "params": { 00:09:59.375 "name": "Nvme1", 00:09:59.375 "trtype": "tcp", 00:09:59.375 "traddr": "10.0.0.2", 00:09:59.375 "adrfam": "ipv4", 00:09:59.375 "trsvcid": "4420", 00:09:59.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:59.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:59.375 "hdgst": false, 00:09:59.375 "ddgst": false 00:09:59.375 }, 00:09:59.375 "method": "bdev_nvme_attach_controller" 00:09:59.375 }' 00:09:59.375 [2024-11-06 15:22:17.267397] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
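
The bdevio initiator receives its bdev configuration over an anonymous pipe (--json /dev/fd/62); the JSON fragment printed above is the bdev_nvme_attach_controller call that dials the listener created a few lines earlier. A standalone equivalent, assuming the fragment is wrapped in the usual SPDK "subsystems"/"bdev" config envelope (sketch, fed on stdin instead of fd 62):

    json='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
          "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
          "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1",
          "hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}'
    printf '%s\n' "$json" | ./test/bdev/bdevio/bdevio --json /dev/stdin

The attached controller "Nvme1" exposes namespace 1 as bdev Nvme1n1, the device the bdevio suite exercises below.
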
00:09:59.375 [2024-11-06 15:22:17.267478] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642426 ] 00:09:59.636 [2024-11-06 15:22:17.364199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.636 [2024-11-06 15:22:17.420602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.636 [2024-11-06 15:22:17.420785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.636 [2024-11-06 15:22:17.420787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.896 I/O targets: 00:09:59.896 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:59.896 00:09:59.896 00:09:59.896 CUnit - A unit testing framework for C - Version 2.1-3 00:09:59.896 http://cunit.sourceforge.net/ 00:09:59.896 00:09:59.896 00:09:59.896 Suite: bdevio tests on: Nvme1n1 00:09:59.896 Test: blockdev write read block ...passed 00:09:59.896 Test: blockdev write zeroes read block ...passed 00:09:59.896 Test: blockdev write zeroes read no split ...passed 00:09:59.896 Test: blockdev write zeroes read split ...passed 00:09:59.896 Test: blockdev write zeroes read split partial ...passed 00:09:59.896 Test: blockdev reset ...[2024-11-06 15:22:17.788759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:59.896 [2024-11-06 15:22:17.788834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f91c0 (9): Bad file descriptor 00:09:59.896 [2024-11-06 15:22:17.849910] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:59.896 passed 00:09:59.896 Test: blockdev write read 8 blocks ...passed 00:09:59.896 Test: blockdev write read size > 128k ...passed 00:09:59.896 Test: blockdev write read invalid size ...passed 00:10:00.157 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:00.157 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:00.157 Test: blockdev write read max offset ...passed 00:10:00.157 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:00.157 Test: blockdev writev readv 8 blocks ...passed 00:10:00.157 Test: blockdev writev readv 30 x 1block ...passed 00:10:00.157 Test: blockdev writev readv block ...passed 00:10:00.157 Test: blockdev writev readv size > 128k ...passed 00:10:00.157 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:00.157 Test: blockdev comparev and writev ...[2024-11-06 15:22:18.070177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.157 [2024-11-06 15:22:18.070223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:00.157 [2024-11-06 15:22:18.070240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.157 [2024-11-06 15:22:18.070250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:00.157 [2024-11-06 15:22:18.070643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.157 [2024-11-06 15:22:18.070657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:00.157 [2024-11-06 15:22:18.070672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.157 [2024-11-06 15:22:18.070688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:00.157 [2024-11-06 15:22:18.071126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.157 [2024-11-06 15:22:18.071140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:00.157 [2024-11-06 15:22:18.071154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.157 [2024-11-06 15:22:18.071162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:00.157 [2024-11-06 15:22:18.071580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.157 [2024-11-06 15:22:18.071593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:00.157 [2024-11-06 15:22:18.071607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.157 [2024-11-06 15:22:18.071615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:00.157 passed 00:10:00.418 Test: blockdev nvme passthru rw ...passed 00:10:00.418 Test: blockdev nvme passthru vendor specific ...[2024-11-06 15:22:18.156261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:00.418 [2024-11-06 15:22:18.156279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:00.419 [2024-11-06 15:22:18.156550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:00.419 [2024-11-06 15:22:18.156562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:00.419 [2024-11-06 15:22:18.156783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:00.419 [2024-11-06 15:22:18.156798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:00.419 [2024-11-06 15:22:18.157027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:00.419 [2024-11-06 15:22:18.157039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:00.419 passed 00:10:00.419 Test: blockdev nvme admin passthru ...passed 00:10:00.419 Test: blockdev copy ...passed 00:10:00.419 00:10:00.419 Run Summary: Type Total Ran Passed Failed Inactive 00:10:00.419 suites 1 1 n/a 0 0 00:10:00.419 tests 23 23 23 0 0 00:10:00.419 asserts 152 152 152 0 n/a 00:10:00.419 00:10:00.419 Elapsed time = 1.186 seconds 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:00.419 rmmod nvme_tcp 00:10:00.419 rmmod nvme_fabrics 00:10:00.419 rmmod nvme_keyring 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
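
Teardown runs in the reverse order of setup: the subsystem is deleted over RPC, the initiator kernel modules are unloaded (the rmmod lines above), and killprocess/nvmftestfini below stop the target and undo the network plumbing. Condensed into one sequence (sketch; rpc.py stands in for the rpc_cmd wrapper, and the netns removal is assumed to be what remove_spdk_ns does):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -r nvme-tcp nvme-fabrics              # initiator modules, see rmmod output above
    kill "$nvmfpid" && wait "$nvmfpid"             # what killprocess does below
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged test rule
    ip netns delete cvl_0_0_ns_spdk                # assumed: remove_spdk_ns deletes the netns
    ip -4 addr flush cvl_0_1
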
00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3642074 ']' 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3642074 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3642074 ']' 00:10:00.419 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3642074 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3642074 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3642074' 00:10:00.679 killing process with pid 3642074 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3642074 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3642074 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.679 15:22:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:03.224 00:10:03.224 real 0m12.221s 00:10:03.224 user 0m13.080s 00:10:03.224 sys 0m6.235s 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.224 ************************************ 00:10:03.224 END TEST nvmf_bdevio 00:10:03.224 ************************************ 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:03.224 00:10:03.224 real 5m5.216s 00:10:03.224 user 11m44.877s 00:10:03.224 sys 1m52.931s 
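
Each suite is driven by a run_test wrapper that prints the START/END banners and times the suite; the real/user/sys triplets above are its output. A minimal stand-in (sketch, not the actual autotest_common.sh helper):

    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }

So the 5m5.216s total above is the wall-clock cost of the whole nvmf_target_core suite, while the 12.221s before it covers nvmf_bdevio alone.
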
00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.224 ************************************ 00:10:03.224 END TEST nvmf_target_core 00:10:03.224 ************************************ 00:10:03.224 15:22:20 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:03.224 15:22:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:03.224 15:22:20 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.224 15:22:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:03.224 ************************************ 00:10:03.224 START TEST nvmf_target_extra 00:10:03.224 ************************************ 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:03.224 * Looking for test storage... 00:10:03.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.224 15:22:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:03.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.225 --rc genhtml_branch_coverage=1 00:10:03.225 --rc genhtml_function_coverage=1 00:10:03.225 --rc genhtml_legend=1 00:10:03.225 --rc geninfo_all_blocks=1 00:10:03.225 --rc geninfo_unexecuted_blocks=1 00:10:03.225 00:10:03.225 ' 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:03.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.225 --rc genhtml_branch_coverage=1 00:10:03.225 --rc genhtml_function_coverage=1 00:10:03.225 --rc genhtml_legend=1 00:10:03.225 --rc geninfo_all_blocks=1 00:10:03.225 --rc geninfo_unexecuted_blocks=1 00:10:03.225 00:10:03.225 ' 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:03.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.225 --rc genhtml_branch_coverage=1 00:10:03.225 --rc genhtml_function_coverage=1 00:10:03.225 --rc genhtml_legend=1 00:10:03.225 --rc geninfo_all_blocks=1 00:10:03.225 --rc geninfo_unexecuted_blocks=1 00:10:03.225 00:10:03.225 ' 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:03.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.225 --rc genhtml_branch_coverage=1 00:10:03.225 --rc genhtml_function_coverage=1 00:10:03.225 --rc genhtml_legend=1 00:10:03.225 --rc geninfo_all_blocks=1 00:10:03.225 --rc geninfo_unexecuted_blocks=1 00:10:03.225 00:10:03.225 ' 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
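
nvmf/common.sh pins the standard NVMe-oF port (4420) plus spare ports for multi-listener cases, and the host identity consumed by nvme connect is generated just below with nvme gen-hostnqn; the NVME_HOST array defined there feeds exactly the --hostnqn/--hostid flags. A quick sketch of that pairing:

    nvme gen-hostnqn
    # -> nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6  (value from this run)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn "$NVME_HOSTNQN" --hostid "$NVME_HOSTID"
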
00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.225 15:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:03.225 ************************************ 00:10:03.225 START TEST nvmf_example 00:10:03.225 ************************************ 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:03.225 * Looking for test storage... 
00:10:03.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:03.225 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:03.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.487 --rc genhtml_branch_coverage=1 00:10:03.487 --rc genhtml_function_coverage=1 00:10:03.487 --rc genhtml_legend=1 00:10:03.487 --rc geninfo_all_blocks=1 00:10:03.487 --rc geninfo_unexecuted_blocks=1 00:10:03.487 00:10:03.487 ' 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:03.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.487 --rc genhtml_branch_coverage=1 00:10:03.487 --rc genhtml_function_coverage=1 00:10:03.487 --rc genhtml_legend=1 00:10:03.487 --rc geninfo_all_blocks=1 00:10:03.487 --rc geninfo_unexecuted_blocks=1 00:10:03.487 00:10:03.487 ' 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:03.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.487 --rc genhtml_branch_coverage=1 00:10:03.487 --rc genhtml_function_coverage=1 00:10:03.487 --rc genhtml_legend=1 00:10:03.487 --rc geninfo_all_blocks=1 00:10:03.487 --rc geninfo_unexecuted_blocks=1 00:10:03.487 00:10:03.487 ' 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:03.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.487 --rc genhtml_branch_coverage=1 00:10:03.487 --rc genhtml_function_coverage=1 00:10:03.487 --rc genhtml_legend=1 00:10:03.487 --rc geninfo_all_blocks=1 00:10:03.487 --rc geninfo_unexecuted_blocks=1 00:10:03.487 00:10:03.487 ' 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:03.487 15:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.487 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:03.488 15:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:03.488 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:11.628 15:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.628 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:11.629 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:11.629 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:11.629 Found net devices under 0000:31:00.0: cvl_0_0 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:11.629 Found net devices under 0000:31:00.1: cvl_0_1 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.629 15:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:11.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:11.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:10:11.629 00:10:11.629 --- 10.0.0.2 ping statistics --- 00:10:11.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.629 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:10:11.629 00:10:11.629 --- 10.0.0.1 ping statistics --- 00:10:11.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.629 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3647025 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3647025 00:10:11.629 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:11.630 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3647025 ']' 00:10:11.630 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.630 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:11.630 15:22:28 
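With reachability confirmed, nvmfexamplestart prepends the namespace wrapper to the example app and launches it on four cores, then waitforlisten (traced below) blocks until the RPC socket at /var/tmp/spdk.sock answers. Assuming rpc_cmd is the usual wrapper over scripts/rpc.py, the launch plus the provisioning sequence that follows would look roughly like this when run by hand from the SPDK repo root:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/examples/nvmf -i 0 -g 10000 -m 0xF &   # target app, cores 0-3

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512           # 64 MiB ramdisk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                       # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                     # listen on the namespaced port

Transport, backing bdev, subsystem, namespace, listener: each rpc_cmd in the trace below maps one-to-one onto this sequence.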
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.630 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:11.630 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.889 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:11.889 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:11.889 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:11.889 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.889 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.889 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.889 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.889 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:12.149 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:22.149 Initializing NVMe Controllers 00:10:22.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:22.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:22.149 Initialization complete. Launching workers. 00:10:22.149 ======================================================== 00:10:22.149 Latency(us) 00:10:22.149 Device Information : IOPS MiB/s Average min max 00:10:22.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19325.80 75.49 3312.97 612.55 16234.59 00:10:22.149 ======================================================== 00:10:22.149 Total : 19325.80 75.49 3312.97 612.55 16234.59 00:10:22.149 00:10:22.409 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:22.410 rmmod nvme_tcp 00:10:22.410 rmmod nvme_fabrics 00:10:22.410 rmmod nvme_keyring 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3647025 ']' 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3647025 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3647025 ']' 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3647025 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3647025 00:10:22.410 15:22:40 
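The measurement itself is a single ten-second spdk_nvme_perf run issued from the root namespace against the namespaced target: -q 64 keeps 64 I/Os outstanding, -o 4096 issues 4 KiB I/Os, -w randrw with -M 30 makes the mix 30% reads / 70% writes, and -r names the NVMe-oF endpoint to connect to. Reduced to its essentials:

    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Against the Malloc0 ramdisk this sustains roughly 19.3k IOPS (75.49 MiB/s) at about 3.3 ms average latency; the EXIT trap then runs nvmftestfini, which unloads the nvme-tcp modules, kills pid 3647025, strips the SPDK_NVMF iptables rule, and tears down the namespace, as traced below.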
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3647025' 00:10:22.410 killing process with pid 3647025 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3647025 00:10:22.410 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3647025 00:10:22.671 nvmf threads initialize successfully 00:10:22.671 bdev subsystem init successfully 00:10:22.671 created a nvmf target service 00:10:22.671 create targets's poll groups done 00:10:22.671 all subsystems of target started 00:10:22.671 nvmf target is running 00:10:22.671 all subsystems of target stopped 00:10:22.671 destroy targets's poll groups done 00:10:22.671 destroyed the nvmf target service 00:10:22.671 bdev subsystem finish successfully 00:10:22.671 nvmf threads destroy successfully 00:10:22.671 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.671 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:22.671 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:22.671 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:22.671 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:22.671 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:22.671 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:22.671 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.671 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.671 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.671 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.671 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.654 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.654 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:24.654 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:24.654 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.654 00:10:24.654 real 0m21.468s 00:10:24.654 user 0m46.434s 00:10:24.654 sys 0m7.022s 00:10:24.654 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:24.654 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.654 ************************************ 00:10:24.654 END TEST nvmf_example 00:10:24.654 ************************************ 00:10:24.654 15:22:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:24.654 15:22:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:24.654 15:22:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:24.654 15:22:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:24.654 ************************************ 00:10:24.654 START TEST nvmf_filesystem 00:10:24.654 ************************************ 00:10:24.654 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:24.920 * Looking for test storage... 00:10:24.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:24.920 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:24.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.921 --rc genhtml_branch_coverage=1 00:10:24.921 --rc genhtml_function_coverage=1 00:10:24.921 --rc genhtml_legend=1 00:10:24.921 --rc geninfo_all_blocks=1 00:10:24.921 --rc geninfo_unexecuted_blocks=1 00:10:24.921 00:10:24.921 ' 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:24.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.921 --rc genhtml_branch_coverage=1 00:10:24.921 --rc genhtml_function_coverage=1 00:10:24.921 --rc genhtml_legend=1 00:10:24.921 --rc geninfo_all_blocks=1 00:10:24.921 --rc geninfo_unexecuted_blocks=1 00:10:24.921 00:10:24.921 ' 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:24.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.921 --rc genhtml_branch_coverage=1 00:10:24.921 --rc genhtml_function_coverage=1 00:10:24.921 --rc genhtml_legend=1 00:10:24.921 --rc geninfo_all_blocks=1 00:10:24.921 --rc geninfo_unexecuted_blocks=1 00:10:24.921 00:10:24.921 ' 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:24.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.921 --rc genhtml_branch_coverage=1 00:10:24.921 --rc genhtml_function_coverage=1 00:10:24.921 --rc genhtml_legend=1 00:10:24.921 --rc geninfo_all_blocks=1 00:10:24.921 --rc geninfo_unexecuted_blocks=1 00:10:24.921 00:10:24.921 ' 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:24.921 15:22:42 
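The lt/cmp_versions trace above is scripts/common.sh asking whether the installed lcov (1.15 here) predates version 2; because it does, the lcov_-prefixed '--rc lcov_branch_coverage=1' spelling is exported in the LCOV_OPTS lines just before this point. The idiom splits both versions on '.', '-' or ':' and compares numeric fields left to right; a minimal re-implementation, under the assumption that all fields are numeric:

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
        done
        return 1                                              # equal is not less-than
    }

    lt 1.15 2 && echo "old lcov, use legacy --rc flags"       # 1 < 2 on field 0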
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:24.921 
15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:24.921 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:24.922 #define SPDK_CONFIG_H 00:10:24.922 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:24.922 #define SPDK_CONFIG_APPS 1 00:10:24.922 #define SPDK_CONFIG_ARCH native 00:10:24.922 #undef SPDK_CONFIG_ASAN 00:10:24.922 #undef SPDK_CONFIG_AVAHI 00:10:24.922 #undef SPDK_CONFIG_CET 00:10:24.922 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:24.922 #define SPDK_CONFIG_COVERAGE 1 00:10:24.922 #define SPDK_CONFIG_CROSS_PREFIX 00:10:24.922 #undef SPDK_CONFIG_CRYPTO 00:10:24.922 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:24.922 #undef SPDK_CONFIG_CUSTOMOCF 00:10:24.922 #undef SPDK_CONFIG_DAOS 00:10:24.922 #define SPDK_CONFIG_DAOS_DIR 00:10:24.922 #define SPDK_CONFIG_DEBUG 1 00:10:24.922 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:24.922 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:24.922 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:24.922 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:24.922 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:24.922 #undef SPDK_CONFIG_DPDK_UADK 00:10:24.922 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:24.922 #define SPDK_CONFIG_EXAMPLES 1 00:10:24.922 #undef SPDK_CONFIG_FC 00:10:24.922 #define SPDK_CONFIG_FC_PATH 00:10:24.922 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:24.922 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:24.922 #define SPDK_CONFIG_FSDEV 1 00:10:24.922 #undef SPDK_CONFIG_FUSE 00:10:24.922 #undef SPDK_CONFIG_FUZZER 00:10:24.922 #define SPDK_CONFIG_FUZZER_LIB 00:10:24.922 #undef SPDK_CONFIG_GOLANG 00:10:24.922 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:24.922 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:24.922 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:24.922 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:24.922 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:24.922 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:24.922 #undef SPDK_CONFIG_HAVE_LZ4 00:10:24.922 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:24.922 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:24.922 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:24.922 #define SPDK_CONFIG_IDXD 1 00:10:24.922 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:24.922 #undef SPDK_CONFIG_IPSEC_MB 00:10:24.922 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:24.922 #define SPDK_CONFIG_ISAL 1 00:10:24.922 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:24.922 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:24.922 #define SPDK_CONFIG_LIBDIR 00:10:24.922 #undef SPDK_CONFIG_LTO 00:10:24.922 #define SPDK_CONFIG_MAX_LCORES 128 00:10:24.922 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:24.922 #define SPDK_CONFIG_NVME_CUSE 1 00:10:24.922 #undef SPDK_CONFIG_OCF 00:10:24.922 #define SPDK_CONFIG_OCF_PATH 00:10:24.922 #define SPDK_CONFIG_OPENSSL_PATH 00:10:24.922 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:24.922 #define SPDK_CONFIG_PGO_DIR 00:10:24.922 #undef SPDK_CONFIG_PGO_USE 00:10:24.922 #define SPDK_CONFIG_PREFIX /usr/local 00:10:24.922 #undef SPDK_CONFIG_RAID5F 00:10:24.922 #undef SPDK_CONFIG_RBD 00:10:24.922 #define SPDK_CONFIG_RDMA 1 00:10:24.922 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:24.922 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:24.922 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:24.922 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:24.922 #define SPDK_CONFIG_SHARED 1 00:10:24.922 #undef SPDK_CONFIG_SMA 00:10:24.922 #define SPDK_CONFIG_TESTS 1 00:10:24.922 #undef SPDK_CONFIG_TSAN 
00:10:24.922 #define SPDK_CONFIG_UBLK 1 00:10:24.922 #define SPDK_CONFIG_UBSAN 1 00:10:24.922 #undef SPDK_CONFIG_UNIT_TESTS 00:10:24.922 #undef SPDK_CONFIG_URING 00:10:24.922 #define SPDK_CONFIG_URING_PATH 00:10:24.922 #undef SPDK_CONFIG_URING_ZNS 00:10:24.922 #undef SPDK_CONFIG_USDT 00:10:24.922 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:24.922 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:24.922 #define SPDK_CONFIG_VFIO_USER 1 00:10:24.922 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:24.922 #define SPDK_CONFIG_VHOST 1 00:10:24.922 #define SPDK_CONFIG_VIRTIO 1 00:10:24.922 #undef SPDK_CONFIG_VTUNE 00:10:24.922 #define SPDK_CONFIG_VTUNE_DIR 00:10:24.922 #define SPDK_CONFIG_WERROR 1 00:10:24.922 #define SPDK_CONFIG_WPDK_DIR 00:10:24.922 #undef SPDK_CONFIG_XNVME 00:10:24.922 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.922 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:24.923 15:22:42 
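pm/common, sourced here, decides just below which power/resource monitors this run will start: CPU load and vmstat always, plus CPU temperature and BMC power draw once the [[ ... ]] probes confirm a physical Linux host (the dotted string in the trace is the padded DMI vendor value being tested against QEMU). The selection amounts to roughly the following; sys_vendor is an illustrative stand-in for however pm/common actually reads that value:

    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)        # always collected
    if [[ $(uname -s) == Linux && $sys_vendor != QEMU && ! -e /.dockerenv ]]; then
        # bare metal: temperature and BMC power numbers are meaningful
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi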
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
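The long run of ': 0' / ': 1' lines paired with exports, from common/autotest_common.sh@58 onward, is the harness stamping a default onto every test knob before exporting it: values already set in the environment (SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810 in this run) survive, while everything left unset collapses to 0 or empty. The trace is consistent with the standard default-parameter idiom (a sketch, not the verbatim source):

    : "${SPDK_TEST_NVMF:=1}"     # already 1 in this run; xtrace prints the expansion as ': 1'
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_RBD:=0}"      # unset here, so it defaults to 0; xtrace prints ': 0'
    export SPDK_TEST_RBD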
00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:24.923 15:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:24.923 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:24.924 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:25.187 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:25.187 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
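
A note on the long '-- # : 0' / '-- # export SPDK_TEST_*' runs above: that is bash xtrace (with the custom PS4 set later in this trace) printing a default-value assignment followed by its export. A minimal sketch of the underlying pattern, assuming the usual ': "${VAR:=default}"' idiom; names and defaults are taken from the trace:

: "${SPDK_TEST_BLOCKDEV:=0}"      # assigns the default only if unset/empty; xtrace prints ': 0'
export SPDK_TEST_BLOCKDEV         # the paired 'export NAME' line in the trace
: "${SPDK_RUN_UBSAN:=1}"          # 1 for this job, per the trace above
export SPDK_RUN_UBSAN
: "${SPDK_TEST_NVMF_NICS:=e810}"  # the NIC family this job targets
export SPDK_TEST_NVMF_NICS
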
00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
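
The sanitizer-suppression steps traced above (rm -rf, echo leak:libfuse3.so, export LSAN_OPTIONS) condense to the following sketch; the append redirection is an assumption, since xtrace does not display redirections:

asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"                       # start from a clean file
echo "leak:libfuse3.so" >> "$asan_suppression_file"   # assumed redirection; suppress a known libfuse3 leak
export LSAN_OPTIONS="suppressions=$asan_suppression_file"
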
00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3649803 ]] 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3649803 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
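
set_test_storage, whose body is traced next, parses 'df -T' into per-mount tables and accepts a candidate directory only if its mount has the requested free space. A condensed sketch of that logic (the read order and awk filter are taken from the trace; the helper name, candidate ordering, and df units are assumptions):

probe_test_storage() {
    local requested_size=$1 target_dir=$2
    local source fs size use avail _ mount
    local -A fss sizes avails
    # One table entry per mount, skipping the df header line.
    while read -r source fs size use avail _ mount; do
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)
    # Resolve which mount the candidate directory lives on.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    local target_space=${avails[$mount]}
    (( target_space >= requested_size ))   # status 0 when there is enough room
}
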
00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.ZzdCmd 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ZzdCmd/tests/target /tmp/spdk.ZzdCmd 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=434749440 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:25.188 15:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4849680384 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=123020734464 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356517376 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6335782912 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668225536 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:25.188 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847934976 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871306752 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23371776 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=387072 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=116736 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:25.189 15:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677720064 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678260736 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=540672 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:25.189 * Looking for test storage... 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=123020734464 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8550375424 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:25.189 15:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:25.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.189 --rc genhtml_branch_coverage=1 00:10:25.189 --rc genhtml_function_coverage=1 00:10:25.189 --rc genhtml_legend=1 00:10:25.189 --rc geninfo_all_blocks=1 00:10:25.189 --rc geninfo_unexecuted_blocks=1 00:10:25.189 00:10:25.189 ' 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:25.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.189 --rc genhtml_branch_coverage=1 00:10:25.189 --rc genhtml_function_coverage=1 00:10:25.189 --rc genhtml_legend=1 00:10:25.189 --rc geninfo_all_blocks=1 00:10:25.189 --rc geninfo_unexecuted_blocks=1 00:10:25.189 00:10:25.189 ' 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:25.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.189 --rc genhtml_branch_coverage=1 00:10:25.189 --rc genhtml_function_coverage=1 00:10:25.189 --rc genhtml_legend=1 00:10:25.189 --rc geninfo_all_blocks=1 00:10:25.189 --rc geninfo_unexecuted_blocks=1 00:10:25.189 00:10:25.189 ' 00:10:25.189 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:25.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.189 --rc genhtml_branch_coverage=1 00:10:25.189 --rc genhtml_function_coverage=1 00:10:25.189 --rc genhtml_legend=1 00:10:25.189 --rc geninfo_all_blocks=1 00:10:25.189 --rc geninfo_unexecuted_blocks=1 00:10:25.189 00:10:25.189 ' 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.190 15:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:25.190 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:33.329 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.329 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:33.330 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.330 15:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:33.330 Found net devices under 0000:31:00.0: cvl_0_0 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:33.330 Found net devices under 0000:31:00.1: cvl_0_1 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:33.330 15:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:33.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:10:33.330 00:10:33.330 --- 10.0.0.2 ping statistics --- 00:10:33.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.330 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:33.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:33.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:10:33.330 00:10:33.330 --- 10.0.0.1 ping statistics --- 00:10:33.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.330 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.330 ************************************ 00:10:33.330 START TEST nvmf_filesystem_no_in_capsule 00:10:33.330 ************************************ 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3653651 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3653651 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3653651 ']' 00:10:33.330 
00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:33.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable
00:10:33.330 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:33.330 [2024-11-06 15:22:50.854998] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:10:33.330 [2024-11-06 15:22:50.855061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:33.330 [2024-11-06 15:22:50.955760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:33.330 [2024-11-06 15:22:51.008633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:33.330 [2024-11-06 15:22:51.008685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:33.331 [2024-11-06 15:22:51.008694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:33.331 [2024-11-06 15:22:51.008701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:33.331 [2024-11-06 15:22:51.008707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:33.331 [2024-11-06 15:22:51.010816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-06 15:22:51.011041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[2024-11-06 15:22:51.010880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-11-06 15:22:51.011041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:33.901 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:10:33.901 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0
00:10:33.901 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:33.901 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:33.901 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:33.901 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:33.902 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:10:33.902 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:10:33.902 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.902 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:33.902 [2024-11-06 15:22:51.736773] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:33.902 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.902 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:10:33.902 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.902 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:33.902 Malloc1
00:10:33.902 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.902 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:33.902 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
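rpc_cmd in these entries drives the target over /var/tmp/spdk.sock; in a plain SPDK checkout the same provisioning (including the nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls that follow just below) can be issued with scripts/rpc.py. A sketch, with the socket path and every argument taken from this log:

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0     # -c 0: no in-capsule data in this pass
    $RPC bdev_malloc_create 512 512 -b Malloc1            # 512 MiB backing bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420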
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:34.162 [2024-11-06 15:22:51.905943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.162 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[
00:10:34.162 {
00:10:34.162 "name": "Malloc1",
00:10:34.162 "aliases": [
00:10:34.162 "9087ac02-2199-484c-a123-34bc372bab85"
00:10:34.162 ],
00:10:34.162 "product_name": "Malloc disk",
00:10:34.162 "block_size": 512,
00:10:34.162 "num_blocks": 1048576,
00:10:34.162 "uuid": "9087ac02-2199-484c-a123-34bc372bab85",
00:10:34.163 "assigned_rate_limits": {
00:10:34.163 "rw_ios_per_sec": 0,
00:10:34.163 "rw_mbytes_per_sec": 0,
00:10:34.163 "r_mbytes_per_sec": 0,
00:10:34.163 "w_mbytes_per_sec": 0
00:10:34.163 },
00:10:34.163 "claimed": true,
00:10:34.163 "claim_type": "exclusive_write",
00:10:34.163 "zoned": false,
00:10:34.163 "supported_io_types": {
00:10:34.163 "read": true,
00:10:34.163 "write": true,
00:10:34.163 "unmap": true,
00:10:34.163 "flush": true,
00:10:34.163 "reset": true,
00:10:34.163 "nvme_admin": false,
00:10:34.163 "nvme_io": false,
00:10:34.163 "nvme_io_md": false,
00:10:34.163 "write_zeroes": true,
00:10:34.163 "zcopy": true,
00:10:34.163 "get_zone_info": false,
00:10:34.163 "zone_management": false,
00:10:34.163 "zone_append": false,
00:10:34.163 "compare": false,
00:10:34.163 "compare_and_write": false,
00:10:34.163 "abort": true,
00:10:34.163 "seek_hole": false,
00:10:34.163 "seek_data": false,
00:10:34.163 "copy": true,
00:10:34.163 "nvme_iov_md": false
00:10:34.163 },
00:10:34.163 "memory_domains": [
00:10:34.163 {
00:10:34.163 "dma_device_id": "system",
00:10:34.163 "dma_device_type": 1
00:10:34.163 },
00:10:34.163 {
00:10:34.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:34.163 "dma_device_type": 2
00:10:34.163 }
00:10:34.163 ],
00:10:34.163 "driver_specific": {}
00:10:34.163 }
00:10:34.163 ]'
00:10:34.163 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size'
00:10:34.163 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512
00:10:34.163 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks'
00:10:34.163 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576
00:10:34.163 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512
00:10:34.163 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512
00:10:34.163 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:10:34.163 15:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:36.074 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:10:36.074 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0
00:10:36.074 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0
00:10:36.074 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]]
00:10:36.074 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 ))
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL
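get_bdev_size above multiplies the two jq results: 512 bytes/block × 1048576 blocks = 536870912 bytes = 512 MiB, which is why malloc_size=536870912 and why the NVMe disk size is checked against the same number below. The host attach that waitforserial is polling for, condensed into a sketch (hostnqn/hostid values as logged; the retry bounds mirror the "(( i++ <= 15 ))" / "sleep 2" entries around this point):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
    i=0
    while (( i++ <= 15 )); do     # up to ~16 tries, 2 s apart
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
    done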
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter ))
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:10:37.986 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:10:38.247 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:10:38.818 15:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:40.201 ************************************
00:10:40.201 START TEST filesystem_ext4
00:10:40.201 ************************************
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1
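Every filesystem_* subtest from here on (ext4 here, then btrfs and xfs below, and the same trio again in the in-capsule pass) executes one body. A sketch assembled from the target/filesystem.sh line numbers visible in the trace; nvmfpid is 3653651 in this run:

    nvmf_filesystem_create() {                   # fstype is ext4, btrfs, or xfs
        local fstype=$1 dev=/dev/nvme0n1p1 force=-f
        [ "$fstype" = ext4 ] && force=-F         # mkfs.ext4 spells "force" as -F
        mkfs.$fstype $force $dev
        mount $dev /mnt/device                   # exercise the fresh filesystem
        touch /mnt/device/aaa
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device
        kill -0 "$nvmfpid"                       # the target must have survived the I/O
        lsblk -l -o NAME | grep -q -w nvme0n1    # device and partition still visible
        lsblk -l -o NAME | grep -q -w nvme0n1p1
    }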
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']'
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F
00:10:40.201 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:10:40.201 mke2fs 1.47.0 (5-Feb-2023)
00:10:40.201 Discarding device blocks: 0/522240 done
00:10:40.201 Creating filesystem with 522240 1k blocks and 130560 inodes
00:10:40.201 Filesystem UUID: 6928f1e8-603b-4339-a54f-8bb322773352
00:10:40.201 Superblock backups stored on blocks:
00:10:40.201 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:10:40.201
00:10:40.201 Allocating group tables: 0/64 done
00:10:40.201 Writing inode tables: 0/64 done
00:10:42.743 Creating journal (8192 blocks): done
00:10:45.066 Writing superblocks and filesystem accounting information: 0/64 done
00:10:45.066
00:10:45.066 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0
00:10:45.066 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3653651
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:51.644
00:10:51.644 real 0m10.773s
00:10:51.644 user 0m0.037s
00:10:51.644 sys 0m0.077s
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:10:51.644 ************************************
00:10:51.644 END TEST filesystem_ext4
00:10:51.644 ************************************
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:51.644 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:51.644 ************************************
00:10:51.644 START TEST filesystem_btrfs
00:10:51.644 ************************************
00:10:51.645 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1
00:10:51.645 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:10:51.645 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:51.645 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:10:51.645 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs
00:10:51.645 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1
00:10:51.645 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0
00:10:51.645 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force
00:10:51.645 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']'
00:10:51.645 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f
00:10:51.645 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:10:51.645 btrfs-progs v6.8.1
00:10:51.645 See https://btrfs.readthedocs.io for more information.
00:10:51.645
00:10:51.645 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:10:51.645 NOTE: several default settings have changed in version 5.15, please make sure
00:10:51.645 this does not affect your deployments:
00:10:51.645 - DUP for metadata (-m dup)
00:10:51.645 - enabled no-holes (-O no-holes)
00:10:51.645 - enabled free-space-tree (-R free-space-tree)
00:10:51.645
00:10:51.645 Label: (null)
00:10:51.645 UUID: 43d15346-4089-4d9b-af5d-1f52f6cdbf13
00:10:51.645 Node size: 16384
00:10:51.645 Sector size: 4096 (CPU page size: 4096)
00:10:51.645 Filesystem size: 510.00MiB
00:10:51.645 Block group profiles:
00:10:51.645 Data: single 8.00MiB
00:10:51.645 Metadata: DUP 32.00MiB
00:10:51.645 System: DUP 8.00MiB
00:10:51.645 SSD detected: yes
00:10:51.645 Zoned device: no
00:10:51.645 Features: extref, skinny-metadata, no-holes, free-space-tree
00:10:51.645 Checksum: crc32c
00:10:51.645 Number of devices: 1
00:10:51.645 Devices:
00:10:51.645 ID SIZE PATH
00:10:51.645 1 510.00MiB /dev/nvme0n1p1
00:10:51.645
00:10:51.645 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0
00:10:51.645 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:52.214 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:52.214 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:10:52.214 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:52.214 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:10:52.214 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:10:52.214 15:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3653651
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:52.214
00:10:52.214 real 0m1.362s
00:10:52.214 user 0m0.036s
00:10:52.214 sys 0m0.114s
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:10:52.214 ************************************
00:10:52.214 END TEST filesystem_btrfs
00:10:52.214 ************************************
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:52.214 ************************************
00:10:52.214 START TEST filesystem_xfs
00:10:52.214 ************************************
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']'
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f
00:10:52.214 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1
00:10:52.214 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:10:52.214 = sectsz=512 attr=2, projid32bit=1
00:10:52.214 = crc=1 finobt=1, sparse=1, rmapbt=0
00:10:52.214 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:10:52.214 data = bsize=4096 blocks=130560, imaxpct=25
00:10:52.214 = sunit=0 swidth=0 blks
00:10:52.214 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:10:52.214 log =internal log bsize=4096 blocks=16384, version=2
00:10:52.214 = sectsz=512 sunit=0 blks, lazy-count=1
00:10:52.214 realtime =none extsz=4096 blocks=0, rtextents=0
00:10:53.154 Discarding blocks...Done.
00:10:53.154 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0
00:10:53.154 15:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:55.066 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:55.066 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:10:55.066 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:55.066 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:10:55.066 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:10:55.066 15:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:55.066 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3653651
00:10:55.066 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:55.066 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:55.066 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:55.066 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:55.066
00:10:55.066 real 0m2.927s
00:10:55.066 user 0m0.025s
00:10:55.066 sys 0m0.081s
00:10:55.066 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:55.066 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:10:55.066 ************************************
00:10:55.066 END TEST filesystem_xfs
00:10:55.066 ************************************
00:10:55.326 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:55.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
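Teardown mirrors the setup in reverse. A sketch of the sequence the next entries perform (subsystem NQN, serial, and pid taken from this run; the disconnect-wait loop is a rough stand-in for waitforserial_disconnect):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # "disconnected 1 controller(s)" above
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1                                          # wait for the block device to vanish
    done
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 3653651 && wait 3653651                         # killprocess: stop nvmf_tgt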
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3653651
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3653651 ']'
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3653651
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:10:55.586 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3653651
00:10:55.846 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:10:55.846 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:10:55.846 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3653651'
00:10:55.846 killing process with pid 3653651
00:10:55.846 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3653651
00:10:55.846 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 3653651
00:10:55.846 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:10:55.846
00:10:55.846 real 0m22.997s
00:10:55.846 user 1m30.925s
00:10:55.846 sys 0m1.529s
00:10:55.846 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:55.846 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:55.846 ************************************
00:10:55.846 END TEST nvmf_filesystem_no_in_capsule
00:10:55.846 ************************************
00:10:55.846 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:10:55.846 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:10:55.846 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:55.846 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:56.107 ************************************
00:10:56.107 START TEST nvmf_filesystem_in_capsule
00:10:56.107 ************************************
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3658261
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3658261
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3658261 ']'
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:56.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
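The second pass repeats the entire sequence with one functional difference: in_capsule=4096 asks the transport to accept up to 4096 bytes of data carried inside the command capsule itself, so small writes avoid a separate data transfer round trip. The only invocation that changes between the two passes is the transport RPC:

    # first pass (no in-capsule data):
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    # this pass (in-capsule data up to 4 KiB):
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096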
00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:56.107 15:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.107 [2024-11-06 15:23:13.927178] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:10:56.107 [2024-11-06 15:23:13.927229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.107 [2024-11-06 15:23:14.021243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.107 [2024-11-06 15:23:14.054860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.107 [2024-11-06 15:23:14.054893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.107 [2024-11-06 15:23:14.054899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.107 [2024-11-06 15:23:14.054904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.107 [2024-11-06 15:23:14.054908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.107 [2024-11-06 15:23:14.056389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.107 [2024-11-06 15:23:14.056539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.107 [2024-11-06 15:23:14.056689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.107 [2024-11-06 15:23:14.056691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 [2024-11-06 15:23:14.784278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.048 15:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 Malloc1 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 [2024-11-06 15:23:14.913185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:57.048 15:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:57.048 { 00:10:57.048 "name": "Malloc1", 00:10:57.048 "aliases": [ 00:10:57.048 "e1c6bad7-dbe5-4513-8c72-c19c0a8c0753" 00:10:57.048 ], 00:10:57.048 "product_name": "Malloc disk", 00:10:57.048 "block_size": 512, 00:10:57.048 "num_blocks": 1048576, 00:10:57.048 "uuid": "e1c6bad7-dbe5-4513-8c72-c19c0a8c0753", 00:10:57.048 "assigned_rate_limits": { 00:10:57.048 "rw_ios_per_sec": 0, 00:10:57.048 "rw_mbytes_per_sec": 0, 00:10:57.048 "r_mbytes_per_sec": 0, 00:10:57.048 "w_mbytes_per_sec": 0 00:10:57.048 }, 00:10:57.048 "claimed": true, 00:10:57.048 "claim_type": "exclusive_write", 00:10:57.048 "zoned": false, 00:10:57.048 "supported_io_types": { 00:10:57.048 "read": true, 00:10:57.048 "write": true, 00:10:57.048 "unmap": true, 00:10:57.048 "flush": true, 00:10:57.048 "reset": true, 00:10:57.048 "nvme_admin": false, 00:10:57.048 "nvme_io": false, 00:10:57.048 "nvme_io_md": false, 00:10:57.048 "write_zeroes": true, 00:10:57.048 "zcopy": true, 00:10:57.048 "get_zone_info": false, 00:10:57.048 "zone_management": false, 00:10:57.048 "zone_append": false, 00:10:57.048 "compare": false, 00:10:57.048 "compare_and_write": false, 00:10:57.048 "abort": true, 00:10:57.048 "seek_hole": false, 00:10:57.048 "seek_data": false, 00:10:57.048 "copy": true, 00:10:57.048 "nvme_iov_md": false 00:10:57.048 }, 00:10:57.048 "memory_domains": [ 00:10:57.048 { 00:10:57.048 "dma_device_id": "system", 00:10:57.048 "dma_device_type": 1 00:10:57.048 }, 00:10:57.048 { 00:10:57.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.048 "dma_device_type": 2 00:10:57.048 } 00:10:57.048 ], 00:10:57.048 "driver_specific": {} 00:10:57.048 } 00:10:57.048 ]' 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:57.048 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:57.312 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:57.312 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:57.312 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:57.312 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:57.312 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:58.695 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:58.695 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:58.695 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.695 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:58.695 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:01.240 15:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:01.240 15:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:01.501 15:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 ************************************ 00:11:02.442 START TEST filesystem_in_capsule_ext4 00:11:02.442 ************************************ 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:02.442 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:02.442 mke2fs 1.47.0 (5-Feb-2023) 00:11:02.442 Discarding device blocks: 0/522240 done 00:11:02.442 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:02.442 Filesystem UUID: 8d125787-cf94-4f25-b3ef-4ec26cb13600 00:11:02.442 Superblock backups stored on blocks: 00:11:02.442 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:02.442 00:11:02.442 Allocating group tables: 0/64 done 00:11:02.703 Writing inode tables: 
0/64 done 00:11:02.703 Creating journal (8192 blocks): done 00:11:03.644 Writing superblocks and filesystem accounting information: 0/64 done 00:11:03.644 00:11:03.644 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:03.644 15:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3658261 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.228 00:11:10.228 real 0m6.814s 00:11:10.228 user 0m0.025s 00:11:10.228 sys 0m0.075s 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:10.228 ************************************ 00:11:10.228 END TEST filesystem_in_capsule_ext4 00:11:10.228 ************************************ 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.228 
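The ext4 pass that just finished is the template the btrfs and xfs passes below repeat: partition the fabric-attached namespace, build a filesystem on it, and prove a single write round-trips while the target process stays alive. Condensed into a standalone shell sketch (device, mount point, flags, and the target pid exactly as traced; this is an illustration of the flow, not the verbatim test script):

# Partition the NVMe-oF namespace and let the kernel re-read the table.
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1

# Build the filesystem and push one small write through the TCP transport.
mkfs.ext4 -F /dev/nvme0n1p1
mkdir -p /mnt/device
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

# The target must still answer signal 0 after the I/O, and lsblk must still
# show both the device and its partition.
kill -0 3658261
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1
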
************************************ 00:11:10.228 START TEST filesystem_in_capsule_btrfs 00:11:10.228 ************************************ 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:10.228 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:10.228 btrfs-progs v6.8.1 00:11:10.228 See https://btrfs.readthedocs.io for more information. 00:11:10.228 00:11:10.228 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:10.228 NOTE: several default settings have changed in version 5.15, please make sure 00:11:10.228 this does not affect your deployments: 00:11:10.228 - DUP for metadata (-m dup) 00:11:10.228 - enabled no-holes (-O no-holes) 00:11:10.228 - enabled free-space-tree (-R free-space-tree) 00:11:10.229 00:11:10.229 Label: (null) 00:11:10.229 UUID: 06c3ac59-a166-46a3-95ca-faf9de453ec6 00:11:10.229 Node size: 16384 00:11:10.229 Sector size: 4096 (CPU page size: 4096) 00:11:10.229 Filesystem size: 510.00MiB 00:11:10.229 Block group profiles: 00:11:10.229 Data: single 8.00MiB 00:11:10.229 Metadata: DUP 32.00MiB 00:11:10.229 System: DUP 8.00MiB 00:11:10.229 SSD detected: yes 00:11:10.229 Zoned device: no 00:11:10.229 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:10.229 Checksum: crc32c 00:11:10.229 Number of devices: 1 00:11:10.229 Devices: 00:11:10.229 ID SIZE PATH 00:11:10.229 1 510.00MiB /dev/nvme0n1p1 00:11:10.229 00:11:10.229 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:10.229 15:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3658261 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.799 00:11:10.799 real 0m1.319s 00:11:10.799 user 0m0.036s 00:11:10.799 sys 0m0.115s 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:10.799 ************************************ 00:11:10.799 END TEST filesystem_in_capsule_btrfs 00:11:10.799 ************************************ 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.799 ************************************ 00:11:10.799 START TEST filesystem_in_capsule_xfs 00:11:10.799 ************************************ 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:10.799 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:10.800 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:10.800 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:10.800 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:10.800 = sectsz=512 attr=2, projid32bit=1 00:11:10.800 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:10.800 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:10.800 data = bsize=4096 blocks=130560, imaxpct=25 00:11:10.800 = sunit=0 swidth=0 blks 00:11:10.800 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:10.800 log =internal log bsize=4096 blocks=16384, version=2 00:11:10.800 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:10.800 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:11.741 Discarding blocks...Done. 
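All three mkfs invocations in this run go through the make_filesystem helper in common/autotest_common.sh, whose branching is visible in the xtrace: ext4 matches the '[' ext4 = ext4 ']' test and gets -F, while btrfs and xfs fall through to -f. A condensed sketch of that branch (the helper's retry counter, local i=0 in the trace, is elided here):

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    # mkfs.ext4 spells "force" as -F; mkfs.btrfs and mkfs.xfs use -f.
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    mkfs."$fstype" "$force" "$dev_name"
}

# The three passes in this log are equivalent to:
#   make_filesystem ext4  /dev/nvme0n1p1
#   make_filesystem btrfs /dev/nvme0n1p1
#   make_filesystem xfs   /dev/nvme0n1p1
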
00:11:11.741 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:11.741 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3658261 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.651 00:11:13.651 real 0m2.606s 00:11:13.651 user 0m0.027s 00:11:13.651 sys 0m0.078s 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:13.651 ************************************ 00:11:13.651 END TEST filesystem_in_capsule_xfs 00:11:13.651 ************************************ 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:13.651 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3658261 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3658261 ']' 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3658261 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:14.221 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3658261 00:11:14.481 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:14.481 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:14.481 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3658261' 00:11:14.481 killing process with pid 3658261 00:11:14.481 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3658261 00:11:14.481 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3658261 00:11:14.481 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:14.481 00:11:14.481 real 0m18.554s 00:11:14.481 user 1m13.392s 00:11:14.481 sys 0m1.410s 00:11:14.481 15:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.481 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.481 ************************************ 00:11:14.481 END TEST nvmf_filesystem_in_capsule 00:11:14.481 ************************************ 00:11:14.481 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:14.481 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.481 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:14.741 rmmod nvme_tcp 00:11:14.741 rmmod nvme_fabrics 00:11:14.741 rmmod nvme_keyring 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.741 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.653 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:16.653 00:11:16.653 real 0m52.003s 00:11:16.653 user 2m46.779s 00:11:16.653 sys 0m8.857s 00:11:16.653 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:16.653 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:16.653 
************************************ 00:11:16.653 END TEST nvmf_filesystem 00:11:16.653 ************************************ 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:16.915 ************************************ 00:11:16.915 START TEST nvmf_target_discovery 00:11:16.915 ************************************ 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:16.915 * Looking for test storage... 00:11:16.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:16.915 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:16.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.916 --rc genhtml_branch_coverage=1 00:11:16.916 --rc genhtml_function_coverage=1 00:11:16.916 --rc genhtml_legend=1 00:11:16.916 --rc geninfo_all_blocks=1 00:11:16.916 --rc geninfo_unexecuted_blocks=1 00:11:16.916 00:11:16.916 ' 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:16.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.916 --rc genhtml_branch_coverage=1 00:11:16.916 --rc genhtml_function_coverage=1 00:11:16.916 --rc genhtml_legend=1 00:11:16.916 --rc geninfo_all_blocks=1 00:11:16.916 --rc geninfo_unexecuted_blocks=1 00:11:16.916 00:11:16.916 ' 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:16.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.916 --rc genhtml_branch_coverage=1 00:11:16.916 --rc genhtml_function_coverage=1 00:11:16.916 --rc genhtml_legend=1 00:11:16.916 --rc geninfo_all_blocks=1 00:11:16.916 --rc geninfo_unexecuted_blocks=1 00:11:16.916 00:11:16.916 ' 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:16.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.916 --rc genhtml_branch_coverage=1 00:11:16.916 --rc genhtml_function_coverage=1 00:11:16.916 --rc genhtml_legend=1 00:11:16.916 --rc geninfo_all_blocks=1 00:11:16.916 --rc geninfo_unexecuted_blocks=1 00:11:16.916 00:11:16.916 ' 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.916 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.177 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:17.177 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:17.177 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.177 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.177 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.177 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.177 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.177 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.177 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.177 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.177 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.178 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:25.319 15:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:25.319 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:25.319 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:25.319 Found net devices under 0000:31:00.0: cvl_0_0 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.319 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:25.320 Found net devices under 0000:31:00.1: cvl_0_1 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.320 15:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:25.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:11:25.320 00:11:25.320 --- 10.0.0.2 ping statistics --- 00:11:25.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.320 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:11:25.320 00:11:25.320 --- 10.0.0.1 ping statistics --- 00:11:25.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.320 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3666445 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3666445 00:11:25.320 15:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3666445 ']' 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:25.320 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.320 [2024-11-06 15:23:42.531889] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:11:25.320 [2024-11-06 15:23:42.531953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.320 [2024-11-06 15:23:42.634893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.320 [2024-11-06 15:23:42.688668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.320 [2024-11-06 15:23:42.688725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.320 [2024-11-06 15:23:42.688734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.320 [2024-11-06 15:23:42.688741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.320 [2024-11-06 15:23:42.688755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
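The DPDK EAL parameters above belong to an nvmf_tgt started inside the cvl_0_0_ns_spdk network namespace assembled during nvmf_tcp_init, so the initiator tools in the root namespace reach the target over the two physical E810 ports rather than plain loopback. Condensed from the trace (interfaces, addresses, and flags exactly as logged; the iptables comment tag added by the ipts wrapper is omitted):

# cvl_0_0 moves into the namespace as the target-side port; cvl_0_1 stays in
# the root namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic on the default port, check reachability both ways,
# then launch the target inside the namespace.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
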
00:11:25.320 [2024-11-06 15:23:42.691184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.320 [2024-11-06 15:23:42.691346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.320 [2024-11-06 15:23:42.691507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.320 [2024-11-06 15:23:42.691507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.582 [2024-11-06 15:23:43.409696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.582 Null1 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.582 15:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.582 [2024-11-06 15:23:43.481052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.582 Null2 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:25.582 Null3 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.582 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.843 Null4 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.843 15:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.843 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:11:26.105 00:11:26.105 Discovery Log Number of Records 6, Generation counter 6 00:11:26.105 =====Discovery Log Entry 0====== 00:11:26.105 trtype: tcp 00:11:26.105 adrfam: ipv4 00:11:26.105 subtype: current discovery subsystem 00:11:26.105 treq: not required 00:11:26.105 portid: 0 00:11:26.105 trsvcid: 4420 00:11:26.105 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.105 traddr: 10.0.0.2 00:11:26.105 eflags: explicit discovery connections, duplicate discovery information 00:11:26.105 sectype: none 00:11:26.105 =====Discovery Log Entry 1====== 00:11:26.105 trtype: tcp 00:11:26.105 adrfam: ipv4 00:11:26.105 subtype: nvme subsystem 00:11:26.105 treq: not required 00:11:26.105 portid: 0 00:11:26.105 trsvcid: 4420 00:11:26.105 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:26.105 traddr: 10.0.0.2 00:11:26.105 eflags: none 00:11:26.105 sectype: none 00:11:26.105 =====Discovery Log Entry 2====== 00:11:26.105 trtype: tcp 00:11:26.105 adrfam: ipv4 00:11:26.105 subtype: nvme subsystem 00:11:26.105 treq: not required 00:11:26.105 portid: 0 00:11:26.105 trsvcid: 4420 00:11:26.105 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:26.105 traddr: 10.0.0.2 00:11:26.105 eflags: none 00:11:26.105 sectype: none 00:11:26.105 =====Discovery Log Entry 3====== 00:11:26.105 trtype: tcp 00:11:26.105 adrfam: ipv4 00:11:26.105 subtype: nvme subsystem 00:11:26.105 treq: not required 00:11:26.105 portid: 0 00:11:26.105 trsvcid: 4420 00:11:26.105 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:26.105 traddr: 10.0.0.2 00:11:26.105 eflags: none 00:11:26.105 sectype: none 00:11:26.105 =====Discovery Log Entry 4====== 00:11:26.105 trtype: tcp 00:11:26.105 adrfam: ipv4 00:11:26.105 subtype: nvme subsystem 
00:11:26.105 treq: not required 00:11:26.105 portid: 0 00:11:26.105 trsvcid: 4420 00:11:26.105 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:26.105 traddr: 10.0.0.2 00:11:26.105 eflags: none 00:11:26.105 sectype: none 00:11:26.105 =====Discovery Log Entry 5====== 00:11:26.105 trtype: tcp 00:11:26.105 adrfam: ipv4 00:11:26.105 subtype: discovery subsystem referral 00:11:26.105 treq: not required 00:11:26.105 portid: 0 00:11:26.105 trsvcid: 4430 00:11:26.105 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.105 traddr: 10.0.0.2 00:11:26.105 eflags: none 00:11:26.105 sectype: none 00:11:26.105 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:26.105 Perform nvmf subsystem discovery via RPC 00:11:26.105 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:26.105 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.105 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.105 [ 00:11:26.105 { 00:11:26.105 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:26.105 "subtype": "Discovery", 00:11:26.105 "listen_addresses": [ 00:11:26.105 { 00:11:26.105 "trtype": "TCP", 00:11:26.105 "adrfam": "IPv4", 00:11:26.105 "traddr": "10.0.0.2", 00:11:26.105 "trsvcid": "4420" 00:11:26.105 } 00:11:26.105 ], 00:11:26.105 "allow_any_host": true, 00:11:26.105 "hosts": [] 00:11:26.105 }, 00:11:26.105 { 00:11:26.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:26.105 "subtype": "NVMe", 00:11:26.105 "listen_addresses": [ 00:11:26.105 { 00:11:26.105 "trtype": "TCP", 00:11:26.105 "adrfam": "IPv4", 00:11:26.105 "traddr": "10.0.0.2", 00:11:26.105 "trsvcid": "4420" 00:11:26.105 } 00:11:26.105 ], 00:11:26.105 "allow_any_host": true, 00:11:26.105 "hosts": [], 00:11:26.105 "serial_number": "SPDK00000000000001", 00:11:26.105 "model_number": "SPDK bdev Controller", 00:11:26.105 "max_namespaces": 32, 00:11:26.105 "min_cntlid": 1, 00:11:26.105 "max_cntlid": 65519, 00:11:26.105 "namespaces": [ 00:11:26.105 { 00:11:26.105 "nsid": 1, 00:11:26.105 "bdev_name": "Null1", 00:11:26.105 "name": "Null1", 00:11:26.105 "nguid": "1BCC5B173A024B10844639E2FFECF8FD", 00:11:26.105 "uuid": "1bcc5b17-3a02-4b10-8446-39e2ffecf8fd" 00:11:26.105 } 00:11:26.105 ] 00:11:26.105 }, 00:11:26.105 { 00:11:26.105 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:26.106 "subtype": "NVMe", 00:11:26.106 "listen_addresses": [ 00:11:26.106 { 00:11:26.106 "trtype": "TCP", 00:11:26.106 "adrfam": "IPv4", 00:11:26.106 "traddr": "10.0.0.2", 00:11:26.106 "trsvcid": "4420" 00:11:26.106 } 00:11:26.106 ], 00:11:26.106 "allow_any_host": true, 00:11:26.106 "hosts": [], 00:11:26.106 "serial_number": "SPDK00000000000002", 00:11:26.106 "model_number": "SPDK bdev Controller", 00:11:26.106 "max_namespaces": 32, 00:11:26.106 "min_cntlid": 1, 00:11:26.106 "max_cntlid": 65519, 00:11:26.106 "namespaces": [ 00:11:26.106 { 00:11:26.106 "nsid": 1, 00:11:26.106 "bdev_name": "Null2", 00:11:26.106 "name": "Null2", 00:11:26.106 "nguid": "4CCB84A2D50648B5BF9D8AEF20DB210D", 00:11:26.106 "uuid": "4ccb84a2-d506-48b5-bf9d-8aef20db210d" 00:11:26.106 } 00:11:26.106 ] 00:11:26.106 }, 00:11:26.106 { 00:11:26.106 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:26.106 "subtype": "NVMe", 00:11:26.106 "listen_addresses": [ 00:11:26.106 { 00:11:26.106 "trtype": "TCP", 00:11:26.106 "adrfam": "IPv4", 00:11:26.106 "traddr": "10.0.0.2", 
00:11:26.106 "trsvcid": "4420" 00:11:26.106 } 00:11:26.106 ], 00:11:26.106 "allow_any_host": true, 00:11:26.106 "hosts": [], 00:11:26.106 "serial_number": "SPDK00000000000003", 00:11:26.106 "model_number": "SPDK bdev Controller", 00:11:26.106 "max_namespaces": 32, 00:11:26.106 "min_cntlid": 1, 00:11:26.106 "max_cntlid": 65519, 00:11:26.106 "namespaces": [ 00:11:26.106 { 00:11:26.106 "nsid": 1, 00:11:26.106 "bdev_name": "Null3", 00:11:26.106 "name": "Null3", 00:11:26.106 "nguid": "9D9E898667844B9FB304EC3BB997BAFA", 00:11:26.106 "uuid": "9d9e8986-6784-4b9f-b304-ec3bb997bafa" 00:11:26.106 } 00:11:26.106 ] 00:11:26.106 }, 00:11:26.106 { 00:11:26.106 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:26.106 "subtype": "NVMe", 00:11:26.106 "listen_addresses": [ 00:11:26.106 { 00:11:26.106 "trtype": "TCP", 00:11:26.106 "adrfam": "IPv4", 00:11:26.106 "traddr": "10.0.0.2", 00:11:26.106 "trsvcid": "4420" 00:11:26.106 } 00:11:26.106 ], 00:11:26.106 "allow_any_host": true, 00:11:26.106 "hosts": [], 00:11:26.106 "serial_number": "SPDK00000000000004", 00:11:26.106 "model_number": "SPDK bdev Controller", 00:11:26.106 "max_namespaces": 32, 00:11:26.106 "min_cntlid": 1, 00:11:26.106 "max_cntlid": 65519, 00:11:26.106 "namespaces": [ 00:11:26.106 { 00:11:26.106 "nsid": 1, 00:11:26.106 "bdev_name": "Null4", 00:11:26.106 "name": "Null4", 00:11:26.106 "nguid": "A3E0AE52004D4ADEAE6CE35883313F7E", 00:11:26.106 "uuid": "a3e0ae52-004d-4ade-ae6c-e35883313f7e" 00:11:26.106 } 00:11:26.106 ] 00:11:26.106 } 00:11:26.106 ] 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.106 15:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:26.106 15:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.106 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.106 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:26.106 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:26.106 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:26.106 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:26.106 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.106 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:26.106 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.106 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:26.106 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.106 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.106 rmmod nvme_tcp 00:11:26.106 rmmod nvme_fabrics 00:11:26.106 rmmod nvme_keyring 00:11:26.367 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.367 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:26.367 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:26.367 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3666445 ']' 00:11:26.367 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3666445 00:11:26.367 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3666445 ']' 00:11:26.367 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3666445 00:11:26.367 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:26.367 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:26.367 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3666445 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3666445' 00:11:26.368 killing process with pid 3666445 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3666445 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3666445 00:11:26.368 15:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.368 15:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.005 00:11:29.005 real 0m11.724s 00:11:29.005 user 0m8.906s 00:11:29.005 sys 0m6.150s 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.005 ************************************ 00:11:29.005 END TEST nvmf_target_discovery 00:11:29.005 ************************************ 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.005 ************************************ 00:11:29.005 START TEST nvmf_referrals 00:11:29.005 ************************************ 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:29.005 * Looking for test storage... 
00:11:29.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:29.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.005 --rc genhtml_branch_coverage=1 00:11:29.005 --rc genhtml_function_coverage=1 00:11:29.005 --rc genhtml_legend=1 00:11:29.005 --rc geninfo_all_blocks=1 00:11:29.005 --rc geninfo_unexecuted_blocks=1 00:11:29.005 00:11:29.005 ' 00:11:29.005 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:29.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.006 --rc genhtml_branch_coverage=1 00:11:29.006 --rc genhtml_function_coverage=1 00:11:29.006 --rc genhtml_legend=1 00:11:29.006 --rc geninfo_all_blocks=1 00:11:29.006 --rc geninfo_unexecuted_blocks=1 00:11:29.006 00:11:29.006 ' 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:29.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.006 --rc genhtml_branch_coverage=1 00:11:29.006 --rc genhtml_function_coverage=1 00:11:29.006 --rc genhtml_legend=1 00:11:29.006 --rc geninfo_all_blocks=1 00:11:29.006 --rc geninfo_unexecuted_blocks=1 00:11:29.006 00:11:29.006 ' 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:29.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.006 --rc genhtml_branch_coverage=1 00:11:29.006 --rc genhtml_function_coverage=1 00:11:29.006 --rc genhtml_legend=1 00:11:29.006 --rc geninfo_all_blocks=1 00:11:29.006 --rc geninfo_unexecuted_blocks=1 00:11:29.006 00:11:29.006 ' 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.006 15:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.150 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.150 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.150 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.150 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:37.150 15:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:37.150 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:37.150 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:37.150 
15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:37.150 Found net devices under 0000:31:00.0: cvl_0_0 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.150 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:37.151 Found net devices under 0000:31:00.1: cvl_0_1 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:37.151 15:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:37.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:11:37.151 00:11:37.151 --- 10.0.0.2 ping statistics --- 00:11:37.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.151 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:11:37.151 00:11:37.151 --- 10.0.0.1 ping statistics --- 00:11:37.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.151 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3670921 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3670921 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3670921 ']' 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
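
The block above is the harness's nvmf_tcp_init: one physical E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the path. A minimal standalone sketch of the same plumbing, with the interface names and 10.0.0.x addresses taken from this rig rather than being general defaults:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port now lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root ns

The nvmf_tgt process is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why every listener address below is 10.0.0.2.
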
00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:37.151 15:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.151 [2024-11-06 15:23:54.442719] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:11:37.151 [2024-11-06 15:23:54.442801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.151 [2024-11-06 15:23:54.544148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.151 [2024-11-06 15:23:54.597642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.151 [2024-11-06 15:23:54.597695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.151 [2024-11-06 15:23:54.597704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.151 [2024-11-06 15:23:54.597716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.151 [2024-11-06 15:23:54.597722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.151 [2024-11-06 15:23:54.600176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.151 [2024-11-06 15:23:54.600339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.151 [2024-11-06 15:23:54.600502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.151 [2024-11-06 15:23:54.600502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.413 [2024-11-06 15:23:55.326936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
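
With the target up on four reactors (-m 0xF), referrals.sh drives it entirely over JSON-RPC. A condensed sketch of the setup sequence traced below, written against SPDK's scripts/rpc.py (rpc_cmd in the test is a thin wrapper around it; the flags are as they appear in the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # The first assertion below: three referrals registered.
  [ "$(rpc.py nvmf_discovery_get_referrals | jq length)" -eq 3 ]

The rest of the test removes the referrals one by one, re-adds 127.0.0.2 with explicit subsystem NQNs (-n discovery, -n nqn.2016-06.io.spdk:cnode1), and checks the count and addresses after each step.
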
00:11:37.413 [2024-11-06 15:23:55.357998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.413 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.674 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:37.675 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.935 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:37.935 15:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:37.936 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:37.936 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:37.936 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.936 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:37.936 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.196 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.458 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:38.718 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:38.718 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:38.718 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.718 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.718 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.718 15:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:38.718 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:38.718 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.718 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:38.718 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.718 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.718 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:38.718 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.979 15:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.239 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.500 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.500 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:39.500 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:39.500 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:39.500 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:39.500 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.500 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:39.500 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
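
Each of the checks above runs twice: get_referral_ips rpc reads the target's own referral table, while get_referral_ips nvme asks the kernel initiator for the discovery log page and must see the same addresses on the wire. The wire-side half boils down to nvme discover JSON filtered with jq; the filters below are verbatim from referrals.sh, and the hostnqn/hostid are this host's generated values:

  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
      --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
      -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # get_discovery_entries narrows to a single record type instead, e.g.:
  #   jq '.records[] | select(.subtype == "discovery subsystem referral")'

With all referrals removed again, the trap is cleared and nvmftestfini (already underway above) unloads the nvme modules and kills the target.
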
00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.761 rmmod nvme_tcp 00:11:39.761 rmmod nvme_fabrics 00:11:39.761 rmmod nvme_keyring 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3670921 ']' 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3670921 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3670921 ']' 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3670921 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3670921 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3670921' 00:11:39.761 killing process with pid 3670921 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 3670921 00:11:39.761 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3670921 00:11:40.021 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:40.021 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:40.021 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:40.021 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:40.021 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:40.021 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:40.021 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:40.021 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.021 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.021 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.021 15:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.021 15:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.933 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:41.933 00:11:41.933 real 0m13.330s 00:11:41.933 user 0m15.764s 00:11:41.933 sys 0m6.586s 00:11:41.933 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:41.933 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.933 ************************************ 00:11:41.933 END TEST nvmf_referrals 00:11:41.933 ************************************ 00:11:41.933 15:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:41.933 15:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:41.933 15:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:41.933 15:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.194 ************************************ 00:11:42.194 START TEST nvmf_connect_disconnect 00:11:42.194 ************************************ 00:11:42.194 15:23:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:42.194 * Looking for test storage... 00:11:42.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.194 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:42.194 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:42.194 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:42.194 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:42.194 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.194 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.194 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.194 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.194 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.194 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.194 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.195 15:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:42.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.195 --rc genhtml_branch_coverage=1 00:11:42.195 --rc genhtml_function_coverage=1 00:11:42.195 --rc genhtml_legend=1 00:11:42.195 --rc geninfo_all_blocks=1 00:11:42.195 --rc geninfo_unexecuted_blocks=1 00:11:42.195 00:11:42.195 ' 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:42.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.195 --rc genhtml_branch_coverage=1 00:11:42.195 --rc genhtml_function_coverage=1 00:11:42.195 --rc genhtml_legend=1 00:11:42.195 --rc geninfo_all_blocks=1 00:11:42.195 --rc geninfo_unexecuted_blocks=1 00:11:42.195 00:11:42.195 ' 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:42.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.195 --rc genhtml_branch_coverage=1 00:11:42.195 --rc genhtml_function_coverage=1 00:11:42.195 --rc genhtml_legend=1 00:11:42.195 --rc geninfo_all_blocks=1 00:11:42.195 --rc geninfo_unexecuted_blocks=1 00:11:42.195 00:11:42.195 ' 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:42.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.195 --rc genhtml_branch_coverage=1 00:11:42.195 --rc genhtml_function_coverage=1 00:11:42.195 --rc genhtml_legend=1 00:11:42.195 --rc geninfo_all_blocks=1 00:11:42.195 --rc geninfo_unexecuted_blocks=1 00:11:42.195 00:11:42.195 ' 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:42.195 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.196 15:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.196 15:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.332 
15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:50.332 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.332 
15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:50.332 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:50.332 Found net devices under 0000:31:00.0: cvl_0_0 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:50.332 Found net devices under 0000:31:00.1: cvl_0_1 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.332 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:50.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:11:50.333 00:11:50.333 --- 10.0.0.2 ping statistics --- 00:11:50.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.333 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:11:50.333 00:11:50.333 --- 10.0.0.1 ping statistics --- 00:11:50.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.333 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3676142 00:11:50.333 15:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3676142 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3676142 ']' 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:50.333 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.333 [2024-11-06 15:24:07.878001] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:11:50.333 [2024-11-06 15:24:07.878065] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.333 [2024-11-06 15:24:07.977351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.333 [2024-11-06 15:24:08.029927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.333 [2024-11-06 15:24:08.029978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.333 [2024-11-06 15:24:08.029987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.333 [2024-11-06 15:24:08.029994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.333 [2024-11-06 15:24:08.030001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
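What nvmf_tcp_init recorded above amounts to: the first E810 port (cvl_0_0) becomes the target NIC inside a private network namespace, its peer (cvl_0_1) stays in the root namespace as the initiator NIC, TCP/4420 is opened with an iptables rule tagged SPDK_NVMF (so teardown can strip it later), and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed from the commands in the log, with names and addresses exactly as recorded:

    ip netns add cvl_0_0_ns_spdk                 # private ns for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'     # comment abridged from the log
    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # waitforlisten then blocks until /var/tmp/spdk.sock answers

Because the target runs behind the namespace, the 10.0.0.2 listener created later is only reachable through cvl_0_1.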
00:11:50.333 [2024-11-06 15:24:08.032141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.333 [2024-11-06 15:24:08.032302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.333 [2024-11-06 15:24:08.032463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.333 [2024-11-06 15:24:08.032463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.905 [2024-11-06 15:24:08.746063] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.905 15:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:50.905 [2024-11-06 15:24:08.832471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:50.905 15:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:55.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.213 rmmod nvme_tcp 00:12:09.213 rmmod nvme_fabrics 00:12:09.213 rmmod nvme_keyring 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3676142 ']' 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3676142 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3676142 ']' 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3676142 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
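connect_disconnect then provisions a single subsystem over RPC and exercises fabric login five times (num_iterations=5 above); each pass ends with one of the "NQN:... disconnected 1 controller(s)" lines. The target-side sequence below is copied from the rpc_cmd calls in the log; rpc.py stands in for the repo's scripts/rpc.py, and the nvme-cli flags in the loop are an assumption, since the loop body itself is not echoed in this log:

    # Target-side provisioning (as issued via rpc_cmd above):
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                      # returns Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

    # Initiator-side loop (sketch; flags assumed, not from the log):
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done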
00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:09.213 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3676142 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3676142' 00:12:09.474 killing process with pid 3676142 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3676142 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3676142 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.474 15:24:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.022 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.022 00:12:12.022 real 0m29.528s 00:12:12.022 user 1m19.233s 00:12:12.022 sys 0m7.160s 00:12:12.022 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.022 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.022 ************************************ 00:12:12.022 END TEST nvmf_connect_disconnect 00:12:12.022 ************************************ 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:12.023 15:24:29 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.023 ************************************ 00:12:12.023 START TEST nvmf_multitarget 00:12:12.023 ************************************ 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:12.023 * Looking for test storage... 00:12:12.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:12.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.023 --rc genhtml_branch_coverage=1 00:12:12.023 --rc genhtml_function_coverage=1 00:12:12.023 --rc genhtml_legend=1 00:12:12.023 --rc geninfo_all_blocks=1 00:12:12.023 --rc geninfo_unexecuted_blocks=1 00:12:12.023 00:12:12.023 ' 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:12.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.023 --rc genhtml_branch_coverage=1 00:12:12.023 --rc genhtml_function_coverage=1 00:12:12.023 --rc genhtml_legend=1 00:12:12.023 --rc geninfo_all_blocks=1 00:12:12.023 --rc geninfo_unexecuted_blocks=1 00:12:12.023 00:12:12.023 ' 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:12.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.023 --rc genhtml_branch_coverage=1 00:12:12.023 --rc genhtml_function_coverage=1 00:12:12.023 --rc genhtml_legend=1 00:12:12.023 --rc geninfo_all_blocks=1 00:12:12.023 --rc geninfo_unexecuted_blocks=1 00:12:12.023 00:12:12.023 ' 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:12.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.023 --rc genhtml_branch_coverage=1 00:12:12.023 --rc genhtml_function_coverage=1 00:12:12.023 --rc genhtml_legend=1 00:12:12.023 --rc geninfo_all_blocks=1 00:12:12.023 --rc geninfo_unexecuted_blocks=1 00:12:12.023 00:12:12.023 ' 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.023 15:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.023 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:12.024 15:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.024 15:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:20.162 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.162 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.162 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.162 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.162 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.162 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.162 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.162 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.162 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.162 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:20.163 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:20.163 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:20.163 Found net devices under 0000:31:00.0: cvl_0_0 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:20.163 Found net devices under 0000:31:00.1: cvl_0_1 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.163 15:24:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:12:20.163 00:12:20.163 --- 10.0.0.2 ping statistics --- 00:12:20.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.163 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:20.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:12:20.163 00:12:20.163 --- 10.0.0.1 ping statistics --- 00:12:20.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.163 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:20.163 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:20.164 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:20.164 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.164 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:20.164 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3684710 00:12:20.164 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3684710 00:12:20.164 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.164 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3684710 ']' 00:12:20.164 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.164 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:20.164 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.164 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:20.164 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:20.164 [2024-11-06 15:24:37.411095] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:12:20.164 [2024-11-06 15:24:37.411165] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.164 [2024-11-06 15:24:37.511802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.164 [2024-11-06 15:24:37.564387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.164 [2024-11-06 15:24:37.564441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.164 [2024-11-06 15:24:37.564449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.164 [2024-11-06 15:24:37.564457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.164 [2024-11-06 15:24:37.564463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.164 [2024-11-06 15:24:37.566871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.164 [2024-11-06 15:24:37.567033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.164 [2024-11-06 15:24:37.567195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.164 [2024-11-06 15:24:37.567195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.425 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:20.425 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:20.425 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:20.425 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.425 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:20.425 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.425 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:20.425 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:20.425 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:20.425 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:20.425 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:20.685 "nvmf_tgt_1" 00:12:20.685 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:20.685 "nvmf_tgt_2" 00:12:20.685 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
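Unlike the previous suite, multitarget.sh manipulates whole target objects rather than subsystems, through test/nvmf/target/multitarget_rpc.py. The flow condensed below matches the calls recorded here and in the log lines that follow (the deletions come next); $rpc is shorthand for the full script path, and the count checks restate the jq length comparisons:

    rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py'
    [ "$($rpc nvmf_get_targets | jq length)" = 1 ]   # only the default target
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" = 3 ]   # default + the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" = 1 ]   # back to the default only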
00:12:20.685 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:20.946 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:20.946 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:20.946 true 00:12:20.946 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:21.207 true 00:12:21.207 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:21.207 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.207 rmmod nvme_tcp 00:12:21.207 rmmod nvme_fabrics 00:12:21.207 rmmod nvme_keyring 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3684710 ']' 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3684710 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3684710 ']' 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3684710 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:21.207 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3684710 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:21.468 15:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3684710' 00:12:21.468 killing process with pid 3684710 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3684710 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3684710 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.468 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.012 00:12:24.012 real 0m11.899s 00:12:24.012 user 0m10.138s 00:12:24.012 sys 0m6.241s 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:24.012 ************************************ 00:12:24.012 END TEST nvmf_multitarget 00:12:24.012 ************************************ 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.012 ************************************ 00:12:24.012 START TEST nvmf_rpc 00:12:24.012 ************************************ 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:24.012 * Looking for test storage... 
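Every suite in this run follows the same shape: nvmftestinit builds the namespaced link, the test body drives RPCs, and the EXIT trap runs nvmftestfini, which unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills the recorded nvmfpid, restores iptables minus the SPDK_NVMF-tagged rules (iptables-save | grep -v SPDK_NVMF | iptables-restore), and flushes the interfaces before the real/user/sys timing line. nvmf_rpc, starting above, repeats that cycle. To rerun a single suite outside Jenkins, invoking the script directly should suffice; sudo and the environment knobs taken from autorun-spdk.conf are assumptions about a local setup, not commands copied from this log:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo NET_TYPE=phy SPDK_TEST_NVMF_NICS=e810 \
        test/nvmf/target/rpc.sh --transport=tcp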
00:12:24.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:24.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.012 --rc genhtml_branch_coverage=1 00:12:24.012 --rc genhtml_function_coverage=1 00:12:24.012 --rc genhtml_legend=1 00:12:24.012 --rc geninfo_all_blocks=1 00:12:24.012 --rc geninfo_unexecuted_blocks=1 00:12:24.012 00:12:24.012 ' 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:24.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.012 --rc genhtml_branch_coverage=1 00:12:24.012 --rc genhtml_function_coverage=1 00:12:24.012 --rc genhtml_legend=1 00:12:24.012 --rc geninfo_all_blocks=1 00:12:24.012 --rc geninfo_unexecuted_blocks=1 00:12:24.012 00:12:24.012 ' 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:24.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.012 --rc genhtml_branch_coverage=1 00:12:24.012 --rc genhtml_function_coverage=1 00:12:24.012 --rc genhtml_legend=1 00:12:24.012 --rc geninfo_all_blocks=1 00:12:24.012 --rc geninfo_unexecuted_blocks=1 00:12:24.012 00:12:24.012 ' 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:24.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.012 --rc genhtml_branch_coverage=1 00:12:24.012 --rc genhtml_function_coverage=1 00:12:24.012 --rc genhtml_legend=1 00:12:24.012 --rc geninfo_all_blocks=1 00:12:24.012 --rc geninfo_unexecuted_blocks=1 00:12:24.012 00:12:24.012 ' 00:12:24.012 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
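The lcov gate traced above (lt 1.15 2) comes from the version comparator in scripts/common.sh: split both versions on '.', '-' and ':' and compare the fields numerically, left to right. A simplified self-contained bash sketch of that logic, reduced to the '<' case (the real cmp_versions also takes an operator argument and handles '>', '==', and friends):

# Return 0 (true) when $1 is a strictly lower version than $2.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing fields count as 0; 10# forces base-10 arithmetic.
        (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
        (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
    done
    return 1  # equal versions are not strictly lower
}
version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc coverage flags"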
00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:24.013 15:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.013 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:32.153 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:32.153 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:32.153 Found net devices under 0000:31:00.0: cvl_0_0 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:32.153 Found net devices under 0000:31:00.1: cvl_0_1 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.153 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.153 15:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:32.153 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:12:32.154 00:12:32.154 --- 10.0.0.2 ping statistics --- 00:12:32.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.154 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:12:32.154 00:12:32.154 --- 10.0.0.1 ping statistics --- 00:12:32.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.154 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3689217 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3689217 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3689217 ']' 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:32.154 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.154 [2024-11-06 15:24:49.428409] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
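nvmfappstart here launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the app answers on its RPC socket. A hedged sketch of that launch-and-wait pattern (paths taken from the trace; the polling loop is illustrative, not the exact common.sh waitforlisten):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# The UNIX-domain RPC socket lives on the shared filesystem, so it is
# reachable from the default namespace; rpc_get_methods is a cheap probe.
until sudo "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    ps -p "$nvmfpid" >/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done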
00:12:32.154 [2024-11-06 15:24:49.428476] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.154 [2024-11-06 15:24:49.529653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.154 [2024-11-06 15:24:49.582971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.154 [2024-11-06 15:24:49.583025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.154 [2024-11-06 15:24:49.583033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.154 [2024-11-06 15:24:49.583041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.154 [2024-11-06 15:24:49.583047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.154 [2024-11-06 15:24:49.585446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.154 [2024-11-06 15:24:49.585607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.154 [2024-11-06 15:24:49.585792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.154 [2024-11-06 15:24:49.585803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:32.414 "tick_rate": 2400000000, 00:12:32.414 "poll_groups": [ 00:12:32.414 { 00:12:32.414 "name": "nvmf_tgt_poll_group_000", 00:12:32.414 "admin_qpairs": 0, 00:12:32.414 "io_qpairs": 0, 00:12:32.414 "current_admin_qpairs": 0, 00:12:32.414 "current_io_qpairs": 0, 00:12:32.414 "pending_bdev_io": 0, 00:12:32.414 "completed_nvme_io": 0, 00:12:32.414 "transports": [] 00:12:32.414 }, 00:12:32.414 { 00:12:32.414 "name": "nvmf_tgt_poll_group_001", 00:12:32.414 "admin_qpairs": 0, 00:12:32.414 "io_qpairs": 0, 00:12:32.414 "current_admin_qpairs": 0, 00:12:32.414 "current_io_qpairs": 0, 00:12:32.414 "pending_bdev_io": 0, 00:12:32.414 "completed_nvme_io": 0, 00:12:32.414 "transports": [] 00:12:32.414 }, 00:12:32.414 { 00:12:32.414 "name": "nvmf_tgt_poll_group_002", 00:12:32.414 "admin_qpairs": 0, 00:12:32.414 "io_qpairs": 0, 00:12:32.414 
"current_admin_qpairs": 0, 00:12:32.414 "current_io_qpairs": 0, 00:12:32.414 "pending_bdev_io": 0, 00:12:32.414 "completed_nvme_io": 0, 00:12:32.414 "transports": [] 00:12:32.414 }, 00:12:32.414 { 00:12:32.414 "name": "nvmf_tgt_poll_group_003", 00:12:32.414 "admin_qpairs": 0, 00:12:32.414 "io_qpairs": 0, 00:12:32.414 "current_admin_qpairs": 0, 00:12:32.414 "current_io_qpairs": 0, 00:12:32.414 "pending_bdev_io": 0, 00:12:32.414 "completed_nvme_io": 0, 00:12:32.414 "transports": [] 00:12:32.414 } 00:12:32.414 ] 00:12:32.414 }' 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:32.414 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.674 [2024-11-06 15:24:50.407970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:32.674 "tick_rate": 2400000000, 00:12:32.674 "poll_groups": [ 00:12:32.674 { 00:12:32.674 "name": "nvmf_tgt_poll_group_000", 00:12:32.674 "admin_qpairs": 0, 00:12:32.674 "io_qpairs": 0, 00:12:32.674 "current_admin_qpairs": 0, 00:12:32.674 "current_io_qpairs": 0, 00:12:32.674 "pending_bdev_io": 0, 00:12:32.674 "completed_nvme_io": 0, 00:12:32.674 "transports": [ 00:12:32.674 { 00:12:32.674 "trtype": "TCP" 00:12:32.674 } 00:12:32.674 ] 00:12:32.674 }, 00:12:32.674 { 00:12:32.674 "name": "nvmf_tgt_poll_group_001", 00:12:32.674 "admin_qpairs": 0, 00:12:32.674 "io_qpairs": 0, 00:12:32.674 "current_admin_qpairs": 0, 00:12:32.674 "current_io_qpairs": 0, 00:12:32.674 "pending_bdev_io": 0, 00:12:32.674 "completed_nvme_io": 0, 00:12:32.674 "transports": [ 00:12:32.674 { 00:12:32.674 "trtype": "TCP" 00:12:32.674 } 00:12:32.674 ] 00:12:32.674 }, 00:12:32.674 { 00:12:32.674 "name": "nvmf_tgt_poll_group_002", 00:12:32.674 "admin_qpairs": 0, 00:12:32.674 "io_qpairs": 0, 00:12:32.674 "current_admin_qpairs": 0, 00:12:32.674 "current_io_qpairs": 0, 00:12:32.674 "pending_bdev_io": 0, 00:12:32.674 "completed_nvme_io": 0, 00:12:32.674 "transports": [ 00:12:32.674 { 00:12:32.674 "trtype": "TCP" 
00:12:32.674 } 00:12:32.674 ] 00:12:32.674 }, 00:12:32.674 { 00:12:32.674 "name": "nvmf_tgt_poll_group_003", 00:12:32.674 "admin_qpairs": 0, 00:12:32.674 "io_qpairs": 0, 00:12:32.674 "current_admin_qpairs": 0, 00:12:32.674 "current_io_qpairs": 0, 00:12:32.674 "pending_bdev_io": 0, 00:12:32.674 "completed_nvme_io": 0, 00:12:32.674 "transports": [ 00:12:32.674 { 00:12:32.674 "trtype": "TCP" 00:12:32.674 } 00:12:32.674 ] 00:12:32.674 } 00:12:32.674 ] 00:12:32.674 }' 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.674 Malloc1 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.674 [2024-11-06 15:24:50.613669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:32.674 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:32.674 [2024-11-06 15:24:50.650569] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:12:32.935 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:32.935 could not add new controller: failed to write to nvme-fabrics device 00:12:32.935 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:32.935 15:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:32.935 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:32.935 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:32.935 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:32.935 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.935 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.935 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.935 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.318 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.318 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:34.318 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.318 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:34.318 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:36.226 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:36.226 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:36.226 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.226 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:36.226 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.226 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:36.226 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.487 [2024-11-06 15:24:54.374651] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:12:36.487 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:36.487 could not add new controller: failed to write to nvme-fabrics device 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.487 
15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.487 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.398 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.398 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:38.398 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.398 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:38.398 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:40.444 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:40.444 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:40.444 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.444 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:40.444 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.444 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:40.444 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.444 
15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.444 [2024-11-06 15:24:58.126067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.444 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.829 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.830 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:41.830 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.830 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:41.830 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:43.743 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.005 [2024-11-06 15:25:01.873479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.005 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.917 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.917 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:45.917 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.917 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:45.918 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.836 [2024-11-06 15:25:05.593458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.836 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.220 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.220 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:49.220 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.220 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:49.220 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:51.130 
15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:51.130 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:51.130 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
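[editor's note] Each iteration traced above and below exercises the same RPC sequence from target/rpc.sh. A minimal sketch of that loop body, using the plain rpc.py client rather than the test's rpc_cmd wrapper (paths and ordering reconstructed from the trace, not the verbatim test code):

    # Hedged sketch of one target/rpc.sh loop iteration (assumed rpc.py location).
    rpc=scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME   # serial the host greps for
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5              # attach bdev as namespace 5
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420           # real run also passes --hostnqn/--hostid
    # ... I/O-visible window; waitforserial polls until the device appears ...
    nvme disconnect -n "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 5
    $rpc nvmf_delete_subsystem "$nqn"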
00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.391 [2024-11-06 15:25:09.304626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.391 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.302 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.302 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:53.302 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.302 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:53.302 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
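[editor's note] The waitforserial polling idiom traced above (common/autotest_common.sh @1200-@1210) keeps counting block devices until one shows the subsystem serial. A minimal sketch, assuming the helper shape implied by the trace (the 15-retry bound and lsblk/grep pipeline are taken from the traced commands; the rest is an approximation):

    # Sketch of waitforserial as suggested by the xtrace output; not verbatim test code.
    waitforserial() {
        local serial=$1 expected=${2:-1} found=0 i=0
        sleep 2                                   # give udev time to create the node
        while (( i++ <= 15 )); do
            # count block devices whose SERIAL column matches the subsystem serial
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found == expected )) && return 0
            sleep 2
        done
        return 1                                  # device never appeared
    }

The matching waitforserial_disconnect helper (@1221-@1233) inverts the check, looping until grep -q -w no longer finds the serial.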
00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.215 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.215 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.216 [2024-11-06 15:25:13.038638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.216 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.216 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.216 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.216 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.216 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.216 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.216 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.216 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.216 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.216 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.601 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.601 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:56.601 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.601 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:56.601 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:59.148 
15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.148 [2024-11-06 15:25:16.773602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.148 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 [2024-11-06 15:25:16.837766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 
15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 [2024-11-06 15:25:16.905955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 [2024-11-06 15:25:16.978189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.149 [2024-11-06 15:25:17.042383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.149 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:59.150 "tick_rate": 2400000000, 00:12:59.150 "poll_groups": [ 00:12:59.150 { 00:12:59.150 "name": "nvmf_tgt_poll_group_000", 00:12:59.150 "admin_qpairs": 0, 00:12:59.150 "io_qpairs": 224, 00:12:59.150 "current_admin_qpairs": 0, 00:12:59.150 "current_io_qpairs": 0, 00:12:59.150 "pending_bdev_io": 0, 00:12:59.150 "completed_nvme_io": 519, 00:12:59.150 "transports": [ 00:12:59.150 { 00:12:59.150 "trtype": "TCP" 00:12:59.150 } 00:12:59.150 ] 00:12:59.150 }, 00:12:59.150 { 00:12:59.150 "name": "nvmf_tgt_poll_group_001", 00:12:59.150 "admin_qpairs": 1, 00:12:59.150 "io_qpairs": 223, 00:12:59.150 "current_admin_qpairs": 0, 00:12:59.150 "current_io_qpairs": 0, 00:12:59.150 "pending_bdev_io": 0, 00:12:59.150 "completed_nvme_io": 224, 00:12:59.150 "transports": [ 00:12:59.150 { 00:12:59.150 "trtype": "TCP" 00:12:59.150 } 00:12:59.150 ] 00:12:59.150 }, 00:12:59.150 { 00:12:59.150 "name": "nvmf_tgt_poll_group_002", 00:12:59.150 "admin_qpairs": 6, 00:12:59.150 "io_qpairs": 218, 00:12:59.150 "current_admin_qpairs": 0, 00:12:59.150 "current_io_qpairs": 0, 00:12:59.150 "pending_bdev_io": 0, 00:12:59.150 "completed_nvme_io": 218, 00:12:59.150 "transports": [ 00:12:59.150 { 00:12:59.150 "trtype": "TCP" 00:12:59.150 } 00:12:59.150 ] 00:12:59.150 }, 00:12:59.150 { 00:12:59.150 "name": "nvmf_tgt_poll_group_003", 00:12:59.150 "admin_qpairs": 0, 00:12:59.150 "io_qpairs": 224, 00:12:59.150 "current_admin_qpairs": 0, 00:12:59.150 "current_io_qpairs": 0, 00:12:59.150 "pending_bdev_io": 0, 00:12:59.150 "completed_nvme_io": 278, 00:12:59.150 "transports": [ 00:12:59.150 { 00:12:59.150 "trtype": "TCP" 00:12:59.150 } 00:12:59.150 ] 00:12:59.150 } 00:12:59.150 ] 00:12:59.150 }' 00:12:59.150 15:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:59.150 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:59.412 rmmod nvme_tcp 00:12:59.412 rmmod nvme_fabrics 00:12:59.412 rmmod nvme_keyring 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3689217 ']' 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3689217 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3689217 ']' 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3689217 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3689217 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3689217' 00:12:59.412 killing process with pid 3689217 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3689217 00:12:59.412 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 3689217 00:12:59.673 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:59.673 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:59.673 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:59.673 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:59.673 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:59.673 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:59.673 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:59.673 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:59.673 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:59.673 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.673 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.673 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.586 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:01.586 00:13:01.586 real 0m38.031s 00:13:01.586 user 1m53.360s 00:13:01.586 sys 0m8.049s 00:13:01.586 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:01.586 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.586 ************************************ 00:13:01.586 END TEST nvmf_rpc 00:13:01.586 ************************************ 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:01.848 ************************************ 00:13:01.848 START TEST nvmf_invalid 00:13:01.848 ************************************ 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:01.848 * Looking for test storage... 
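[editor's note] Before the nvmf_invalid test storage lines below, the nvmf_rpc epilogue above reduced the captured nvmf_get_stats JSON with jsum (target/rpc.sh @19-@20). A minimal sketch of that aggregation, assuming the helper filters the $stats JSON captured just before it (helper shape reconstructed from the traced jq/awk pipeline):

    # Sketch of jsum: sum one numeric field across all poll groups in $stats.
    jsum() {
        local filter=$1
        jq "$filter" <<<"$stats" | awk '{s+=$1} END {print s}'
    }
    # e.g. (( $(jsum '.poll_groups[].io_qpairs') > 0 ))   # 889 in the run above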
00:13:01.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.848 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.109 --rc genhtml_branch_coverage=1 00:13:02.109 --rc genhtml_function_coverage=1 00:13:02.109 --rc genhtml_legend=1 00:13:02.109 --rc geninfo_all_blocks=1 00:13:02.109 --rc geninfo_unexecuted_blocks=1 00:13:02.109 00:13:02.109 ' 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.109 --rc genhtml_branch_coverage=1 00:13:02.109 --rc genhtml_function_coverage=1 00:13:02.109 --rc genhtml_legend=1 00:13:02.109 --rc geninfo_all_blocks=1 00:13:02.109 --rc geninfo_unexecuted_blocks=1 00:13:02.109 00:13:02.109 ' 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.109 --rc genhtml_branch_coverage=1 00:13:02.109 --rc genhtml_function_coverage=1 00:13:02.109 --rc genhtml_legend=1 00:13:02.109 --rc geninfo_all_blocks=1 00:13:02.109 --rc geninfo_unexecuted_blocks=1 00:13:02.109 00:13:02.109 ' 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.109 --rc genhtml_branch_coverage=1 00:13:02.109 --rc genhtml_function_coverage=1 00:13:02.109 --rc genhtml_legend=1 00:13:02.109 --rc geninfo_all_blocks=1 00:13:02.109 --rc geninfo_unexecuted_blocks=1 00:13:02.109 00:13:02.109 ' 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:02.109 15:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.109 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:02.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:02.110 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.253 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:10.254 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:10.254 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:10.254 Found net devices under 0000:31:00.0: cvl_0_0 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:10.254 Found net devices under 0000:31:00.1: cvl_0_1 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:10.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:13:10.254 00:13:10.254 --- 10.0.0.2 ping statistics --- 00:13:10.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.254 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms
00:13:10.254 
00:13:10.254 --- 10.0.0.1 ping statistics ---
00:13:10.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:10.254 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3699084
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3699084
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3699084 ']'
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:10.254 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100
00:13:10.255 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:10.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:10.255 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable
00:13:10.255 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:10.255 [2024-11-06 15:25:27.629479] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
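The nvmftestinit sequence traced above is what gives this job its two-sided TCP topology: the e810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (the target side, where nvmf_tgt is launched via ip netns exec), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side); an iptables rule admits TCP port 4420 and one ping in each direction proves the path. A minimal standalone sketch of the same pattern, using the interface names and addresses from this log, and assuming root privileges and stock iproute2/iptables; this is a reconstruction, not the verbatim SPDK helper:

    #!/usr/bin/env bash
    # Sketch of the namespace split performed by nvmf/common.sh's nvmf_tcp_init,
    # reconstructed from the trace above.
    set -euo pipefail
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # namespaced target -> root ns

Everything from here on runs the target inside that namespace, which is why the app is spawned as 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt' in the trace.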
00:13:10.255 [2024-11-06 15:25:27.629543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.255 [2024-11-06 15:25:27.731121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.255 [2024-11-06 15:25:27.785339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.255 [2024-11-06 15:25:27.785391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.255 [2024-11-06 15:25:27.785399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.255 [2024-11-06 15:25:27.785407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.255 [2024-11-06 15:25:27.785413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.255 [2024-11-06 15:25:27.787811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.255 [2024-11-06 15:25:27.787979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.255 [2024-11-06 15:25:27.788140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.255 [2024-11-06 15:25:27.788143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.516 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:10.516 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:13:10.516 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:10.516 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:10.516 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:10.516 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.516 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:10.516 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21728 00:13:10.776 [2024-11-06 15:25:28.722109] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:11.038 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:11.038 { 00:13:11.038 "nqn": "nqn.2016-06.io.spdk:cnode21728", 00:13:11.038 "tgt_name": "foobar", 00:13:11.038 "method": "nvmf_create_subsystem", 00:13:11.038 "req_id": 1 00:13:11.038 } 00:13:11.038 Got JSON-RPC error response 00:13:11.038 response: 00:13:11.038 { 00:13:11.038 "code": -32603, 00:13:11.038 "message": "Unable to find target foobar" 00:13:11.038 }' 00:13:11.038 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:11.038 { 00:13:11.038 "nqn": "nqn.2016-06.io.spdk:cnode21728", 00:13:11.038 "tgt_name": "foobar", 00:13:11.038 "method": "nvmf_create_subsystem", 00:13:11.038 "req_id": 1 00:13:11.038 } 00:13:11.038 Got JSON-RPC error response 00:13:11.038 
response: 00:13:11.038 { 00:13:11.038 "code": -32603, 00:13:11.038 "message": "Unable to find target foobar" 00:13:11.038 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:11.038 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:11.038 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14353 00:13:11.038 [2024-11-06 15:25:28.931024] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14353: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:11.038 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:11.038 { 00:13:11.038 "nqn": "nqn.2016-06.io.spdk:cnode14353", 00:13:11.038 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:11.038 "method": "nvmf_create_subsystem", 00:13:11.038 "req_id": 1 00:13:11.038 } 00:13:11.038 Got JSON-RPC error response 00:13:11.038 response: 00:13:11.038 { 00:13:11.038 "code": -32602, 00:13:11.038 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:11.038 }' 00:13:11.038 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:11.038 { 00:13:11.038 "nqn": "nqn.2016-06.io.spdk:cnode14353", 00:13:11.038 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:11.038 "method": "nvmf_create_subsystem", 00:13:11.038 "req_id": 1 00:13:11.038 } 00:13:11.038 Got JSON-RPC error response 00:13:11.038 response: 00:13:11.038 { 00:13:11.038 "code": -32602, 00:13:11.038 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:11.038 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:11.038 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:11.038 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5625 00:13:11.300 [2024-11-06 15:25:29.135731] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5625: invalid model number 'SPDK_Controller' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:11.300 { 00:13:11.300 "nqn": "nqn.2016-06.io.spdk:cnode5625", 00:13:11.300 "model_number": "SPDK_Controller\u001f", 00:13:11.300 "method": "nvmf_create_subsystem", 00:13:11.300 "req_id": 1 00:13:11.300 } 00:13:11.300 Got JSON-RPC error response 00:13:11.300 response: 00:13:11.300 { 00:13:11.300 "code": -32602, 00:13:11.300 "message": "Invalid MN SPDK_Controller\u001f" 00:13:11.300 }' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:11.300 { 00:13:11.300 "nqn": "nqn.2016-06.io.spdk:cnode5625", 00:13:11.300 "model_number": "SPDK_Controller\u001f", 00:13:11.300 "method": "nvmf_create_subsystem", 00:13:11.300 "req_id": 1 00:13:11.300 } 00:13:11.300 Got JSON-RPC error response 00:13:11.300 response: 00:13:11.300 { 00:13:11.300 "code": -32602, 00:13:11.300 "message": "Invalid MN SPDK_Controller\u001f" 00:13:11.300 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:11.300 15:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.300 15:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:11.300 15:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:11.300 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:11.562 15:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ';nGx}c WAIn6C<2aI4E\#' 00:13:11.562 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ';nGx}c WAIn6C<2aI4E\#' nqn.2016-06.io.spdk:cnode339 00:13:11.562 [2024-11-06 15:25:29.517216] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode339: invalid serial number ';nGx}c WAIn6C<2aI4E\#' 00:13:11.824 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:11.824 { 00:13:11.824 "nqn": "nqn.2016-06.io.spdk:cnode339", 00:13:11.824 "serial_number": ";nGx}c WAIn6C<2aI4E\\#", 00:13:11.824 "method": "nvmf_create_subsystem", 00:13:11.824 "req_id": 1 00:13:11.824 } 00:13:11.824 Got JSON-RPC error response 00:13:11.824 response: 00:13:11.824 { 00:13:11.824 "code": -32602, 00:13:11.824 "message": "Invalid SN ;nGx}c WAIn6C<2aI4E\\#" 00:13:11.824 }' 00:13:11.824 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:11.824 { 00:13:11.824 "nqn": "nqn.2016-06.io.spdk:cnode339", 00:13:11.824 "serial_number": ";nGx}c WAIn6C<2aI4E\\#", 00:13:11.824 "method": "nvmf_create_subsystem", 00:13:11.824 "req_id": 1 00:13:11.824 } 00:13:11.824 Got JSON-RPC error response 00:13:11.824 response: 00:13:11.824 { 00:13:11.824 "code": -32602, 00:13:11.824 "message": "Invalid SN ;nGx}c WAIn6C<2aI4E\\#" 00:13:11.824 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:11.824 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:11.824 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:11.824 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' 
'79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:11.824 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:11.824 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:11.824 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:11.824 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.824 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:11.824 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
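The character-by-character expansion above and below is invalid.sh's gen_random_s helper building a 41-character model number, one printf %x / echo -e '\xNN' pair per character, drawing codes 32 through 127 from the chars array; because the script set RANDOM=0 earlier in this trace, the sequence is deterministic across runs. A condensed sketch of the same technique in plain bash, not the verbatim SPDK helper:

    gen_random_s() {
        # Build $1 characters from byte values 32..127, mirroring the traced
        # chars=() array and the repeated printf %x / echo -e '\xNN' steps.
        local length=$1 ll code ch string=''
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( 32 + RANDOM % 96 ))              # 32..127: printable ASCII plus DEL
            printf -v ch "\\x$(printf '%x' "$code")"  # printf -v keeps a bare space intact
            string+=$ch
        done
        echo "$string"
    }
    gen_random_s 41   # same length as the model number under construction here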
00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x21' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # printf %x 64 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:11.825 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll < length )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:13:11.826 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=s 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:12.089 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.090 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.090 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:12.090 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:12.090 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:12.090 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.090 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.090 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]] 00:13:12.090 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'r&.{=%i[!"N{>yC@k*]{29"Ni72=a;OE`1 skF#S' 00:13:12.090 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'r&.{=%i[!"N{>yC@k*]{29"Ni72=a;OE`1 skF#S' nqn.2016-06.io.spdk:cnode12825 00:13:12.090 [2024-11-06 15:25:30.059363] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12825: invalid model number 'r&.{=%i[!"N{>yC@k*]{29"Ni72=a;OE`1 skF#S' 00:13:12.350 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:12.350 { 00:13:12.350 "nqn": "nqn.2016-06.io.spdk:cnode12825", 00:13:12.350 "model_number": "r&.{=%i\u007f[!\"N{>yC@k*]{29\"Ni72=a;OE`1 skF#S", 00:13:12.350 "method": "nvmf_create_subsystem", 00:13:12.350 "req_id": 1 00:13:12.350 } 00:13:12.350 Got JSON-RPC error response 00:13:12.350 response: 00:13:12.350 { 00:13:12.350 "code": -32602, 00:13:12.350 "message": "Invalid MN 
r&.{=%i\u007f[!\"N{>yC@k*]{29\"Ni72=a;OE`1 skF#S" 00:13:12.350 }' 00:13:12.350 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:12.350 { 00:13:12.350 "nqn": "nqn.2016-06.io.spdk:cnode12825", 00:13:12.350 "model_number": "r&.{=%i\u007f[!\"N{>yC@k*]{29\"Ni72=a;OE`1 skF#S", 00:13:12.350 "method": "nvmf_create_subsystem", 00:13:12.350 "req_id": 1 00:13:12.350 } 00:13:12.350 Got JSON-RPC error response 00:13:12.350 response: 00:13:12.350 { 00:13:12.350 "code": -32602, 00:13:12.350 "message": "Invalid MN r&.{=%i\u007f[!\"N{>yC@k*]{29\"Ni72=a;OE`1 skF#S" 00:13:12.350 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:12.350 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:12.350 [2024-11-06 15:25:30.248058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.350 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:12.609 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:12.609 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:12.609 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:12.609 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:12.609 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:12.870 [2024-11-06 15:25:30.633231] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:12.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:12.870 { 00:13:12.870 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:12.870 "listen_address": { 00:13:12.870 "trtype": "tcp", 00:13:12.870 "traddr": "", 00:13:12.870 "trsvcid": "4421" 00:13:12.870 }, 00:13:12.870 "method": "nvmf_subsystem_remove_listener", 00:13:12.870 "req_id": 1 00:13:12.870 } 00:13:12.870 Got JSON-RPC error response 00:13:12.870 response: 00:13:12.870 { 00:13:12.870 "code": -32602, 00:13:12.870 "message": "Invalid parameters" 00:13:12.870 }' 00:13:12.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:12.870 { 00:13:12.870 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:12.870 "listen_address": { 00:13:12.870 "trtype": "tcp", 00:13:12.870 "traddr": "", 00:13:12.870 "trsvcid": "4421" 00:13:12.870 }, 00:13:12.870 "method": "nvmf_subsystem_remove_listener", 00:13:12.870 "req_id": 1 00:13:12.870 } 00:13:12.870 Got JSON-RPC error response 00:13:12.870 response: 00:13:12.870 { 00:13:12.870 "code": -32602, 00:13:12.870 "message": "Invalid parameters" 00:13:12.870 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:12.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11511 -i 0 00:13:12.870 [2024-11-06 15:25:30.817830] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11511: invalid cntlid range [0-65519] 00:13:12.870 15:25:30 
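The long printf %x / echo -e loop traced above is how target/invalid.sh assembles a random model-number string one byte at a time. A minimal self-contained sketch of the same technique (the byte range and length bounds here are assumptions; only the expanded trace of the real generator is shown in this log):

    # Build a random string the way the traced loop does: pick a byte value,
    # render it as hex with printf %x, then decode the escape into a character.
    string=''
    length=$(( RANDOM % 40 + 1 ))                    # assumed length range
    for (( ll = 0; ll < length; ll++ )); do
        hex=$(printf '%x' $(( RANDOM % 96 + 32 )))   # assumed range 0x20-0x7f (DEL included,
                                                     # matching the \u007f seen in the error)
        printf -v ch "\\x$hex"                       # printf -v keeps a bare space intact,
        string+=$ch                                  # unlike a $(echo -e ...) substitution
    done
    echo "$string"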
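Every negative check in this section follows one pattern: run an RPC that must fail, capture the JSON-RPC error into out=, then glob-match the message with [[ ]] (the backslash-heavy *\I\n\v\a\l\i\d\ \M\N* in the trace is just xtrace's rendering of the pattern *Invalid MN*). A hedged sketch of that pattern outside the harness ($bad_mn stands in for the generated string; the rpc.py path is from the log, and || true covers the nonzero exit an expected failure produces under set -e):

    out=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            nvmf_create_subsystem -d "$bad_mn" nqn.2016-06.io.spdk:cnode12825 2>&1) || true
    [[ $out == *'Invalid MN'* ]] || { echo "expected an Invalid MN error"; exit 1; }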
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:12.870 { 00:13:12.870 "nqn": "nqn.2016-06.io.spdk:cnode11511", 00:13:12.870 "min_cntlid": 0, 00:13:12.870 "method": "nvmf_create_subsystem", 00:13:12.870 "req_id": 1 00:13:12.870 } 00:13:12.870 Got JSON-RPC error response 00:13:12.870 response: 00:13:12.870 { 00:13:12.870 "code": -32602, 00:13:12.870 "message": "Invalid cntlid range [0-65519]" 00:13:12.870 }' 00:13:12.870 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:12.870 { 00:13:12.870 "nqn": "nqn.2016-06.io.spdk:cnode11511", 00:13:12.870 "min_cntlid": 0, 00:13:12.870 "method": "nvmf_create_subsystem", 00:13:12.870 "req_id": 1 00:13:12.870 } 00:13:12.870 Got JSON-RPC error response 00:13:12.870 response: 00:13:12.870 { 00:13:12.870 "code": -32602, 00:13:12.870 "message": "Invalid cntlid range [0-65519]" 00:13:12.870 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.131 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16683 -i 65520 00:13:13.131 [2024-11-06 15:25:31.002361] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16683: invalid cntlid range [65520-65519] 00:13:13.131 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:13.131 { 00:13:13.131 "nqn": "nqn.2016-06.io.spdk:cnode16683", 00:13:13.131 "min_cntlid": 65520, 00:13:13.131 "method": "nvmf_create_subsystem", 00:13:13.131 "req_id": 1 00:13:13.131 } 00:13:13.131 Got JSON-RPC error response 00:13:13.131 response: 00:13:13.131 { 00:13:13.131 "code": -32602, 00:13:13.131 "message": "Invalid cntlid range [65520-65519]" 00:13:13.131 }' 00:13:13.131 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:13.131 { 00:13:13.131 "nqn": "nqn.2016-06.io.spdk:cnode16683", 00:13:13.131 "min_cntlid": 65520, 00:13:13.131 "method": "nvmf_create_subsystem", 00:13:13.131 "req_id": 1 00:13:13.131 } 00:13:13.131 Got JSON-RPC error response 00:13:13.131 response: 00:13:13.131 { 00:13:13.131 "code": -32602, 00:13:13.131 "message": "Invalid cntlid range [65520-65519]" 00:13:13.131 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.131 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5620 -I 0 00:13:13.392 [2024-11-06 15:25:31.186921] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5620: invalid cntlid range [1-0] 00:13:13.392 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:13.392 { 00:13:13.392 "nqn": "nqn.2016-06.io.spdk:cnode5620", 00:13:13.392 "max_cntlid": 0, 00:13:13.392 "method": "nvmf_create_subsystem", 00:13:13.392 "req_id": 1 00:13:13.392 } 00:13:13.392 Got JSON-RPC error response 00:13:13.392 response: 00:13:13.392 { 00:13:13.392 "code": -32602, 00:13:13.392 "message": "Invalid cntlid range [1-0]" 00:13:13.392 }' 00:13:13.392 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:13.392 { 00:13:13.392 "nqn": "nqn.2016-06.io.spdk:cnode5620", 00:13:13.392 "max_cntlid": 0, 00:13:13.392 "method": "nvmf_create_subsystem", 00:13:13.392 "req_id": 1 00:13:13.392 } 00:13:13.392 Got JSON-RPC error response 00:13:13.392 
response: 00:13:13.392 { 00:13:13.392 "code": -32602, 00:13:13.392 "message": "Invalid cntlid range [1-0]" 00:13:13.392 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.392 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24755 -I 65520 00:13:13.392 [2024-11-06 15:25:31.371502] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24755: invalid cntlid range [1-65520] 00:13:13.654 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:13.654 { 00:13:13.654 "nqn": "nqn.2016-06.io.spdk:cnode24755", 00:13:13.654 "max_cntlid": 65520, 00:13:13.654 "method": "nvmf_create_subsystem", 00:13:13.654 "req_id": 1 00:13:13.654 } 00:13:13.654 Got JSON-RPC error response 00:13:13.654 response: 00:13:13.654 { 00:13:13.654 "code": -32602, 00:13:13.654 "message": "Invalid cntlid range [1-65520]" 00:13:13.654 }' 00:13:13.654 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:13.654 { 00:13:13.654 "nqn": "nqn.2016-06.io.spdk:cnode24755", 00:13:13.654 "max_cntlid": 65520, 00:13:13.654 "method": "nvmf_create_subsystem", 00:13:13.654 "req_id": 1 00:13:13.654 } 00:13:13.654 Got JSON-RPC error response 00:13:13.654 response: 00:13:13.654 { 00:13:13.654 "code": -32602, 00:13:13.654 "message": "Invalid cntlid range [1-65520]" 00:13:13.654 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.654 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15390 -i 6 -I 5 00:13:13.654 [2024-11-06 15:25:31.556097] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15390: invalid cntlid range [6-5] 00:13:13.654 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:13.654 { 00:13:13.654 "nqn": "nqn.2016-06.io.spdk:cnode15390", 00:13:13.654 "min_cntlid": 6, 00:13:13.654 "max_cntlid": 5, 00:13:13.654 "method": "nvmf_create_subsystem", 00:13:13.654 "req_id": 1 00:13:13.654 } 00:13:13.654 Got JSON-RPC error response 00:13:13.654 response: 00:13:13.654 { 00:13:13.654 "code": -32602, 00:13:13.654 "message": "Invalid cntlid range [6-5]" 00:13:13.654 }' 00:13:13.654 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:13.654 { 00:13:13.654 "nqn": "nqn.2016-06.io.spdk:cnode15390", 00:13:13.654 "min_cntlid": 6, 00:13:13.654 "max_cntlid": 5, 00:13:13.654 "method": "nvmf_create_subsystem", 00:13:13.654 "req_id": 1 00:13:13.654 } 00:13:13.654 Got JSON-RPC error response 00:13:13.654 response: 00:13:13.654 { 00:13:13.654 "code": -32602, 00:13:13.654 "message": "Invalid cntlid range [6-5]" 00:13:13.654 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:13.654 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:13.915 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:13.915 { 00:13:13.915 "name": "foobar", 00:13:13.915 "method": "nvmf_delete_target", 00:13:13.915 "req_id": 1 00:13:13.915 } 00:13:13.915 Got JSON-RPC error response 00:13:13.915 response: 00:13:13.915 { 00:13:13.915 "code": -32602, 00:13:13.915 
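The five nvmf_create_subsystem calls above probe the controller-ID limits from both ends: cntlid must lie in 1..65519 and min_cntlid cannot exceed max_cntlid, so [0-65519], [65520-65519], [1-0], [1-65520] and [6-5] are all rejected with code -32602. The same sweep, condensed (subsystem names and flags taken from the traced calls; the loop itself is a sketch, not the harness code):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    cases=( "cnode11511:-i 0" "cnode16683:-i 65520" "cnode5620:-I 0"
            "cnode24755:-I 65520" "cnode15390:-i 6 -I 5" )
    for c in "${cases[@]}"; do
        node=${c%%:*}; flags=${c#*:}
        out=$($rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:$node" $flags 2>&1) || true
        [[ $out == *'Invalid cntlid range'* ]] || echo "FAIL: $node ($flags)"
    done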
"message": "The specified target doesn'\''t exist, cannot delete it." 00:13:13.915 }' 00:13:13.915 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:13.915 { 00:13:13.915 "name": "foobar", 00:13:13.915 "method": "nvmf_delete_target", 00:13:13.915 "req_id": 1 00:13:13.915 } 00:13:13.915 Got JSON-RPC error response 00:13:13.915 response: 00:13:13.915 { 00:13:13.915 "code": -32602, 00:13:13.915 "message": "The specified target doesn't exist, cannot delete it." 00:13:13.916 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:13.916 rmmod nvme_tcp 00:13:13.916 rmmod nvme_fabrics 00:13:13.916 rmmod nvme_keyring 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3699084 ']' 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3699084 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 3699084 ']' 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 3699084 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3699084 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3699084' 00:13:13.916 killing process with pid 3699084 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 3699084 00:13:13.916 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 3699084 00:13:14.177 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:14.177 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:13:14.177 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:14.177 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:14.177 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:14.177 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:14.177 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:14.177 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:14.177 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:14.177 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.177 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.177 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.102 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:16.102 00:13:16.102 real 0m14.396s 00:13:16.102 user 0m21.329s 00:13:16.102 sys 0m6.900s 00:13:16.102 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:16.102 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:16.102 ************************************ 00:13:16.102 END TEST nvmf_invalid 00:13:16.102 ************************************ 00:13:16.102 15:25:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:16.102 15:25:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:16.102 15:25:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:16.102 15:25:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:16.363 ************************************ 00:13:16.363 START TEST nvmf_connect_stress 00:13:16.363 ************************************ 00:13:16.363 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:16.363 * Looking for test storage... 
00:13:16.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.363 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:16.363 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:16.363 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:16.363 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:16.363 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.363 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:16.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.364 --rc genhtml_branch_coverage=1 00:13:16.364 --rc genhtml_function_coverage=1 00:13:16.364 --rc genhtml_legend=1 00:13:16.364 --rc geninfo_all_blocks=1 00:13:16.364 --rc geninfo_unexecuted_blocks=1 00:13:16.364 00:13:16.364 ' 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:16.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.364 --rc genhtml_branch_coverage=1 00:13:16.364 --rc genhtml_function_coverage=1 00:13:16.364 --rc genhtml_legend=1 00:13:16.364 --rc geninfo_all_blocks=1 00:13:16.364 --rc geninfo_unexecuted_blocks=1 00:13:16.364 00:13:16.364 ' 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:16.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.364 --rc genhtml_branch_coverage=1 00:13:16.364 --rc genhtml_function_coverage=1 00:13:16.364 --rc genhtml_legend=1 00:13:16.364 --rc geninfo_all_blocks=1 00:13:16.364 --rc geninfo_unexecuted_blocks=1 00:13:16.364 00:13:16.364 ' 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:16.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.364 --rc genhtml_branch_coverage=1 00:13:16.364 --rc genhtml_function_coverage=1 00:13:16.364 --rc genhtml_legend=1 00:13:16.364 --rc geninfo_all_blocks=1 00:13:16.364 --rc geninfo_unexecuted_blocks=1 00:13:16.364 00:13:16.364 ' 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
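The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) is a field-wise dotted-version comparison used to decide which lcov coverage flags to export. The real helper splits on '.', '-' and ':' (the IFS=.-: lines); this self-contained sketch keeps just the dotted case and is a reconstruction, not the verbatim function:

    lt() {    # lt A B -> success when version A sorts before version B
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x: enable branch/function coverage flags"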
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
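Worth flagging: the three PATH exports above carry the same /opt/golangci, /opt/protoc and /opt/go prefixes seven times over, because paths/export.sh prepends unconditionally every time it is re-sourced. Harmless, but an order-preserving dedup is cheap (an editor's sketch, not part of the test):

    # keep the first occurrence of each component, preserve order
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}    # awk's ORS leaves one trailing ':' to trim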
-- # '[' '' -eq 1 ']' 00:13:16.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:16.364 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.365 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:16.365 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:16.365 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:16.625 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.625 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.625 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.625 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:16.625 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:16.625 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:16.625 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:24.763 15:25:41 
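The "[: : integer expression expected" complaint above is a real but benign bug in the sourced common.sh: line 33 ends up running [ '' -eq 1 ] because the variable under test is unset, and [ cannot compare an empty string numerically. Defaulting the expansion silences it; a one-line sketch (the variable name is hypothetical, the trace does not show it):

    # before: [ "$SPDK_SOME_FLAG" -eq 1 ]    -> "[: : integer expression expected"
    # after: default the parameter so the test always sees a number
    if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag is set"    # hypothetical branch body
    fi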
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:24.763 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:24.763 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.763 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:24.763 Found net devices under 0000:31:00.0: cvl_0_0 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:24.764 Found net devices under 0000:31:00.1: cvl_0_1 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
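The discovery pass above walks known Intel/Mellanox PCI IDs and then resolves each matching function to its kernel interface through sysfs, which is how 0000:31:00.0 and 0000:31:00.1 become cvl_0_0 and cvl_0_1. The sysfs walk, reduced to its core (vendor/device IDs from the log; standard sysfs layout assumed):

    # find E810 functions (vendor 0x8086, device 0x159b) and their net devices
    for pci in /sys/bus/pci/devices/*; do
        [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue                 # skip functions with no bound netdev
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done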
-- # net_devs+=("${pci_net_devs[@]}") 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:24.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:13:24.764 00:13:24.764 --- 10.0.0.2 ping statistics --- 00:13:24.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.764 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:24.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:13:24.764 00:13:24.764 --- 10.0.0.1 ping statistics --- 00:13:24.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.764 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3704306 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3704306 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3704306 ']' 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:24.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:24.764 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.764 [2024-11-06 15:25:42.044258] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:13:24.764 [2024-11-06 15:25:42.044322] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.764 [2024-11-06 15:25:42.144672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:24.764 [2024-11-06 15:25:42.195904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.764 [2024-11-06 15:25:42.195962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.764 [2024-11-06 15:25:42.195971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.764 [2024-11-06 15:25:42.195978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.764 [2024-11-06 15:25:42.195984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.764 [2024-11-06 15:25:42.197849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.764 [2024-11-06 15:25:42.198176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.764 [2024-11-06 15:25:42.198179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 [2024-11-06 15:25:42.911483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
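Before the target started, nvmf_tcp_init stitched the two E810 ports into a point-to-point rig: the target port moves into a fresh network namespace, both sides get 10.0.0.0/24 addresses, TCP port 4420 is opened, and a ping in each direction proves the path. Collected from the trace into one block (interface names and addresses exactly as logged; the initial addr-flush steps are omitted; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side enters the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator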
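With networking up, the harness launches nvmf_tgt inside the namespace and configures it over /var/tmp/spdk.sock. The equivalent direct commands (binary path, masks and RPC arguments all from the trace; waitforlisten is replaced by a plain sleep in this sketch):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    sleep 2                                   # stand-in for the waitforlisten helper
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10        # -a: allow any host, -m: max namespaces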
00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 [2024-11-06 15:25:42.937046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 NULL1 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3704651 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.025 15:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.025 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.025 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.025 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:25.287 15:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.287 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.547 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.547 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:25.547 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.547 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.547 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.808 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.808 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:25.808 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.808 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.808 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.069 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.069 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:26.069 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.069 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.069 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.641 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.641 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:26.641 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.641 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.641 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.901 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.901 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:26.901 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.901 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.901 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.161 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.161 15:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:27.161 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.161 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.161 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.421 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.421 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:27.421 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.421 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.421 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.992 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.992 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:27.992 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.992 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.992 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.252 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.252 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:28.252 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.252 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.252 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.511 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.511 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:28.511 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.511 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.511 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.771 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.771 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:28.771 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.771 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.771 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.031 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.031 15:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:29.031 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.031 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.031 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.601 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.601 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:29.601 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.601 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.601 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.861 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.861 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:29.861 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.861 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.861 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.121 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.121 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:30.121 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.121 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.121 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.381 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.381 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:30.381 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.381 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.381 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.641 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.641 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:30.641 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.641 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.641 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.211 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.211 15:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:31.211 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.211 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.211 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.471 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.471 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:31.472 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.472 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.472 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.732 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.732 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:31.732 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.732 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.732 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.993 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.993 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:31.993 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.993 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.993 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.253 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.253 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:32.253 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.253 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.253 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.824 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.824 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:32.824 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.824 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.824 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.085 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.085 15:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:33.085 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.085 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.085 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.346 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.346 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:33.346 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.346 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.346 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.606 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.606 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:33.606 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.606 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.606 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.199 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.199 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:34.199 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.199 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.199 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.519 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.519 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:34.519 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.519 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.519 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.825 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.825 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:34.825 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.825 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.825 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.087 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.087 15:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:35.087 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.087 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.087 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.347 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3704651 00:13:35.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3704651) - No such process 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3704651 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:35.347 rmmod nvme_tcp 00:13:35.347 rmmod nvme_fabrics 00:13:35.347 rmmod nvme_keyring 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3704306 ']' 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3704306 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3704306 ']' 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3704306 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3704306 00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
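The long run of kill -0 / rpc_cmd pairs above is the stress loop itself: connect_stress.sh keeps the target busy with RPCs for as long as the perf process (PID 3704651, $PERF_PID) stays alive, and kill -0 merely probes for the PID without delivering a signal. In outline (xtrace does not show redirections, so feeding rpc_cmd from the rpc.txt batch assembled by the seq 1 20 loop earlier is an assumption):

    while kill -0 "$PERF_PID"; do      # true while the stress tool is still running
        rpc_cmd < "$rpcs"              # $rpcs = .../test/nvmf/target/rpc.txt from the trace
    done
    wait "$PERF_PID"                   # reached once kill reports "No such process"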
00:13:35.347 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:35.348 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3704306' 00:13:35.348 killing process with pid 3704306 00:13:35.348 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3704306 00:13:35.348 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3704306 00:13:35.607 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:35.607 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:35.607 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:35.607 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:35.607 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:35.607 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:35.607 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:35.607 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:35.607 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:35.607 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.607 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.607 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.517 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:37.517 00:13:37.517 real 0m21.366s 00:13:37.517 user 0m42.039s 00:13:37.517 sys 0m9.496s 00:13:37.517 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:37.517 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.517 ************************************ 00:13:37.517 END TEST nvmf_connect_stress 00:13:37.517 ************************************ 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:37.778 ************************************ 00:13:37.778 START TEST nvmf_fused_ordering 00:13:37.778 ************************************ 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:37.778 * Looking for test storage... 
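Between the END and START banners above, nvmftestfini tore the previous target down. The helpers come from nvmf/common.sh and common/autotest_common.sh; in outline (bodies simplified, order as traced):

    sync
    modprobe -v -r nvme-tcp                               # unloads nvme_tcp / nvme_fabrics / nvme_keyring (rmmod lines above)
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"                                # kill + wait on the nvmf_tgt app (pid 3704306 here)
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip the SPDK-tagged firewall rules
    ip -4 addr flush cvl_0_1                              # clear the test address from the second port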
00:13:37.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:37.778 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.779 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.779 --rc genhtml_branch_coverage=1 00:13:37.779 --rc genhtml_function_coverage=1 00:13:37.779 --rc genhtml_legend=1 00:13:37.779 --rc geninfo_all_blocks=1 00:13:37.779 --rc geninfo_unexecuted_blocks=1 00:13:37.779 00:13:37.779 ' 00:13:37.779 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.779 --rc genhtml_branch_coverage=1 00:13:37.779 --rc genhtml_function_coverage=1 00:13:37.779 --rc genhtml_legend=1 00:13:37.779 --rc geninfo_all_blocks=1 00:13:37.779 --rc geninfo_unexecuted_blocks=1 00:13:37.779 00:13:37.779 ' 00:13:37.779 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.779 --rc genhtml_branch_coverage=1 00:13:37.779 --rc genhtml_function_coverage=1 00:13:37.779 --rc genhtml_legend=1 00:13:37.779 --rc geninfo_all_blocks=1 00:13:37.779 --rc geninfo_unexecuted_blocks=1 00:13:37.779 00:13:37.779 ' 00:13:37.779 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.779 --rc genhtml_branch_coverage=1 00:13:37.779 --rc genhtml_function_coverage=1 00:13:37.779 --rc genhtml_legend=1 00:13:37.779 --rc geninfo_all_blocks=1 00:13:37.779 --rc geninfo_unexecuted_blocks=1 00:13:37.779 00:13:37.779 ' 00:13:37.779 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.039 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:38.039 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.039 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.039 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.039 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.039 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:38.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:38.040 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:46.177 15:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:46.177 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:46.177 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:46.177 Found net devices under 0000:31:00.0: cvl_0_0 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.177 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:46.177 Found net devices under 0000:31:00.1: cvl_0_1 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:46.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:13:46.178 00:13:46.178 --- 10.0.0.2 ping statistics --- 00:13:46.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.178 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:13:46.178 00:13:46.178 --- 10.0.0.1 ping statistics --- 00:13:46.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.178 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3710897 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3710897 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3710897 ']' 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:46.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:46.178 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.178 [2024-11-06 15:26:03.503929] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:13:46.178 [2024-11-06 15:26:03.503999] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.178 [2024-11-06 15:26:03.605630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.178 [2024-11-06 15:26:03.656661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.178 [2024-11-06 15:26:03.656713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.178 [2024-11-06 15:26:03.656722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.178 [2024-11-06 15:26:03.656729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.178 [2024-11-06 15:26:03.656736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.178 [2024-11-06 15:26:03.657540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.439 [2024-11-06 15:26:04.378460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.439 [2024-11-06 15:26:04.402752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.439 NULL1 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.439 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.700 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.700 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:46.700 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.700 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.700 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.700 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:46.700 [2024-11-06 15:26:04.473444] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
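For anyone reproducing this setup outside the harness, the trace above reduces to a short sequence of standalone commands. This is a minimal sketch under this run's assumptions, not the harness itself: the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing, and port 4420 are taken from the trace, and the nvmf_tgt and rpc.py paths assume the working directory is an SPDK checkout.

# Move the target-side port into its own network namespace; the
# initiator-side port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, then verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Start the target inside the namespace. The RPC socket is a filesystem
# UNIX socket (/var/tmp/spdk.sock), so rpc.py still works from the root
# namespace.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# Provision over JSON-RPC: TCP transport, one subsystem, a listener, and a
# 1000 MiB null bdev exposed as namespace 1 (hence "size: 1GB" below).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The teardown traced near the end of this test reverses the same steps: SPDK_NVMF-tagged rules are filtered out of iptables-save output and the result restored, the nvme-tcp/nvme-fabrics modules are unloaded, the target PID is killed, and the namespace is removed.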
00:13:46.700 [2024-11-06 15:26:04.473486] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711077 ] 00:13:47.271 Attached to nqn.2016-06.io.spdk:cnode1 00:13:47.271 Namespace ID: 1 size: 1GB 00:13:47.271 fused_ordering(0) 00:13:47.271 fused_ordering(1) 00:13:47.271 fused_ordering(2) 00:13:47.271 fused_ordering(3) 00:13:47.271 fused_ordering(4) 00:13:47.271 fused_ordering(5) 00:13:47.271 fused_ordering(6) 00:13:47.271 fused_ordering(7) 00:13:47.271 fused_ordering(8) 00:13:47.271 fused_ordering(9) 00:13:47.271 fused_ordering(10) 00:13:47.271 fused_ordering(11) 00:13:47.271 fused_ordering(12) 00:13:47.271 fused_ordering(13) 00:13:47.271 fused_ordering(14) 00:13:47.271 fused_ordering(15) 00:13:47.271 fused_ordering(16) 00:13:47.271 fused_ordering(17) 00:13:47.271 fused_ordering(18) 00:13:47.271 fused_ordering(19) 00:13:47.271 fused_ordering(20) 00:13:47.271 fused_ordering(21) 00:13:47.271 fused_ordering(22) 00:13:47.271 fused_ordering(23) 00:13:47.271 fused_ordering(24) 00:13:47.271 fused_ordering(25) 00:13:47.271 fused_ordering(26) 00:13:47.271 fused_ordering(27) 00:13:47.271 fused_ordering(28) 00:13:47.271 fused_ordering(29) 00:13:47.271 fused_ordering(30) 00:13:47.271 fused_ordering(31) 00:13:47.271 fused_ordering(32) 00:13:47.271 fused_ordering(33) 00:13:47.271 fused_ordering(34) 00:13:47.271 fused_ordering(35) 00:13:47.271 fused_ordering(36) 00:13:47.271 fused_ordering(37) 00:13:47.271 fused_ordering(38) 00:13:47.271 fused_ordering(39) 00:13:47.271 fused_ordering(40) 00:13:47.271 fused_ordering(41) 00:13:47.271 fused_ordering(42) 00:13:47.271 fused_ordering(43) 00:13:47.271 fused_ordering(44) 00:13:47.271 fused_ordering(45) 00:13:47.271 fused_ordering(46) 00:13:47.271 fused_ordering(47) 00:13:47.271 fused_ordering(48) 00:13:47.271 fused_ordering(49) 00:13:47.271 fused_ordering(50) 00:13:47.271 fused_ordering(51) 00:13:47.271 fused_ordering(52) 00:13:47.271 fused_ordering(53) 00:13:47.271 fused_ordering(54) 00:13:47.271 fused_ordering(55) 00:13:47.271 fused_ordering(56) 00:13:47.271 fused_ordering(57) 00:13:47.272 fused_ordering(58) 00:13:47.272 fused_ordering(59) 00:13:47.272 fused_ordering(60) 00:13:47.272 fused_ordering(61) 00:13:47.272 fused_ordering(62) 00:13:47.272 fused_ordering(63) 00:13:47.272 fused_ordering(64) 00:13:47.272 fused_ordering(65) 00:13:47.272 fused_ordering(66) 00:13:47.272 fused_ordering(67) 00:13:47.272 fused_ordering(68) 00:13:47.272 fused_ordering(69) 00:13:47.272 fused_ordering(70) 00:13:47.272 fused_ordering(71) 00:13:47.272 fused_ordering(72) 00:13:47.272 fused_ordering(73) 00:13:47.272 fused_ordering(74) 00:13:47.272 fused_ordering(75) 00:13:47.272 fused_ordering(76) 00:13:47.272 fused_ordering(77) 00:13:47.272 fused_ordering(78) 00:13:47.272 fused_ordering(79) 00:13:47.272 fused_ordering(80) 00:13:47.272 fused_ordering(81) 00:13:47.272 fused_ordering(82) 00:13:47.272 fused_ordering(83) 00:13:47.272 fused_ordering(84) 00:13:47.272 fused_ordering(85) 00:13:47.272 fused_ordering(86) 00:13:47.272 fused_ordering(87) 00:13:47.272 fused_ordering(88) 00:13:47.272 fused_ordering(89) 00:13:47.272 fused_ordering(90) 00:13:47.272 fused_ordering(91) 00:13:47.272 fused_ordering(92) 00:13:47.272 fused_ordering(93) 00:13:47.272 fused_ordering(94) 00:13:47.272 fused_ordering(95) 00:13:47.272 fused_ordering(96) 00:13:47.272 fused_ordering(97) 00:13:47.272 fused_ordering(98) 
00:13:47.272 fused_ordering(99) ... 00:13:49.311 fused_ordering(958) [860 intermediate fused_ordering progress entries (99 through 958, stamped 00:13:47.272 to 00:13:49.311) condensed: the counter advances monotonically with no gaps or reordering; the run resumes below at 959]
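Each fused_ordering(N) line is a progress marker printed by the fused-ordering example initiator as it works through its command sequence against the null-backed namespace; the run resumes just below and finishes at fused_ordering(1023). To rerun only this initiator step against a target provisioned as sketched earlier, the invocation recorded in this log (given here with a repo-relative path) is:

./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

A kernel-initiator sanity check of the same listener, assuming nvme-cli is installed (the modprobe nvme-tcp above loads the transport it needs), would look roughly like:

modprobe nvme-tcp
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list        # the 1GB null namespace should appear
nvme disconnect -n nqn.2016-06.io.spdk:cnode1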
00:13:49.311 fused_ordering(959) 00:13:49.311 fused_ordering(960) 00:13:49.311 fused_ordering(961) 00:13:49.311 fused_ordering(962) 00:13:49.312 fused_ordering(963) 00:13:49.312 fused_ordering(964) 00:13:49.312 fused_ordering(965) 00:13:49.312 fused_ordering(966) 00:13:49.312 fused_ordering(967) 00:13:49.312 fused_ordering(968) 00:13:49.312 fused_ordering(969) 00:13:49.312 fused_ordering(970) 00:13:49.312 fused_ordering(971) 00:13:49.312 fused_ordering(972) 00:13:49.312 fused_ordering(973) 00:13:49.312 fused_ordering(974) 00:13:49.312 fused_ordering(975) 00:13:49.312 fused_ordering(976) 00:13:49.312 fused_ordering(977) 00:13:49.312 fused_ordering(978) 00:13:49.312 fused_ordering(979) 00:13:49.312 fused_ordering(980) 00:13:49.312 fused_ordering(981) 00:13:49.312 fused_ordering(982) 00:13:49.312 fused_ordering(983) 00:13:49.312 fused_ordering(984) 00:13:49.312 fused_ordering(985) 00:13:49.312 fused_ordering(986) 00:13:49.312 fused_ordering(987) 00:13:49.312 fused_ordering(988) 00:13:49.312 fused_ordering(989) 00:13:49.312 fused_ordering(990) 00:13:49.312 fused_ordering(991) 00:13:49.312 fused_ordering(992) 00:13:49.312 fused_ordering(993) 00:13:49.312 fused_ordering(994) 00:13:49.312 fused_ordering(995) 00:13:49.312 fused_ordering(996) 00:13:49.312 fused_ordering(997) 00:13:49.312 fused_ordering(998) 00:13:49.312 fused_ordering(999) 00:13:49.312 fused_ordering(1000) 00:13:49.312 fused_ordering(1001) 00:13:49.312 fused_ordering(1002) 00:13:49.312 fused_ordering(1003) 00:13:49.312 fused_ordering(1004) 00:13:49.312 fused_ordering(1005) 00:13:49.312 fused_ordering(1006) 00:13:49.312 fused_ordering(1007) 00:13:49.312 fused_ordering(1008) 00:13:49.312 fused_ordering(1009) 00:13:49.312 fused_ordering(1010) 00:13:49.312 fused_ordering(1011) 00:13:49.312 fused_ordering(1012) 00:13:49.312 fused_ordering(1013) 00:13:49.312 fused_ordering(1014) 00:13:49.312 fused_ordering(1015) 00:13:49.312 fused_ordering(1016) 00:13:49.312 fused_ordering(1017) 00:13:49.312 fused_ordering(1018) 00:13:49.312 fused_ordering(1019) 00:13:49.312 fused_ordering(1020) 00:13:49.312 fused_ordering(1021) 00:13:49.312 fused_ordering(1022) 00:13:49.312 fused_ordering(1023) 00:13:49.312 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:49.312 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:49.312 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.312 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:49.312 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.312 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:49.312 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.312 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.312 rmmod nvme_tcp 00:13:49.312 rmmod nvme_fabrics 00:13:49.312 rmmod nvme_keyring 00:13:49.312 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.312 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:49.312 15:26:07 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3710897 ']' 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3710897 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3710897 ']' 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3710897 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3710897 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3710897' 00:13:49.312 killing process with pid 3710897 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3710897 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3710897 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.312 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:51.856 00:13:51.856 real 0m13.701s 00:13:51.856 user 0m7.142s 00:13:51.856 sys 0m7.481s 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:51.856 ************************************ 00:13:51.856 END TEST nvmf_fused_ordering 00:13:51.856 
************************************ 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:51.856 ************************************ 00:13:51.856 START TEST nvmf_ns_masking 00:13:51.856 ************************************ 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:51.856 * Looking for test storage... 00:13:51.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:51.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.856 --rc genhtml_branch_coverage=1 00:13:51.856 --rc genhtml_function_coverage=1 00:13:51.856 --rc genhtml_legend=1 00:13:51.856 --rc geninfo_all_blocks=1 00:13:51.856 --rc geninfo_unexecuted_blocks=1 00:13:51.856 00:13:51.856 ' 00:13:51.856 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:51.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.857 --rc genhtml_branch_coverage=1 00:13:51.857 --rc genhtml_function_coverage=1 00:13:51.857 --rc genhtml_legend=1 00:13:51.857 --rc geninfo_all_blocks=1 00:13:51.857 --rc geninfo_unexecuted_blocks=1 00:13:51.857 00:13:51.857 ' 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:51.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.857 --rc genhtml_branch_coverage=1 00:13:51.857 --rc genhtml_function_coverage=1 00:13:51.857 --rc genhtml_legend=1 00:13:51.857 --rc geninfo_all_blocks=1 00:13:51.857 --rc geninfo_unexecuted_blocks=1 00:13:51.857 00:13:51.857 ' 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:51.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.857 --rc genhtml_branch_coverage=1 00:13:51.857 --rc genhtml_function_coverage=1 00:13:51.857 --rc genhtml_legend=1 00:13:51.857 --rc geninfo_all_blocks=1 00:13:51.857 --rc geninfo_unexecuted_blocks=1 00:13:51.857 00:13:51.857 ' 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:51.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=1a5cd29d-ed88-4714-8c44-accdd45f7028 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ccc925a0-61c3-4c77-9595-78a01536fdfb 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d1e8ee47-f112-4d0e-ae43-a1819525fcb5 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:51.857 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.035 15:26:16 
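
Before touching the network, the script pre-generates every identifier the masking test needs: two namespace UUIDs, a subsystem NQN, two host NQNs, and a host ID for nvme connect. A sketch of that setup (the literal UUIDs naturally differ on every run):

    ns1uuid=$(uuidgen)                    # later becomes namespace 1's NGUID
    ns2uuid=$(uuidgen)                    # later becomes namespace 2's NGUID
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1    # the host whose visibility gets toggled
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=$(uuidgen)                     # handed to nvme connect via -I
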
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:00.035 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:00.035 15:26:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:00.035 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:00.035 Found net devices under 0000:31:00.0: cvl_0_0 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.035 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
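
The loop above is how the harness maps each supported PCI function to a kernel netdev: it globs the device's net/ directory in sysfs and keeps the basename. A condensed sketch of that logic, assuming pci_devs has already been filled with the e810 addresses found in this run:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
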
00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:00.036 Found net devices under 0000:31:00.1: cvl_0_1 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.036 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.036 15:26:17 
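
With two ports on the same NIC, nvmf_tcp_init builds its topology by pushing the target port into its own network namespace, as the commands above (and the lo bring-up just below) show. The shape of it, using this run's interface names:

    ip netns add cvl_0_0_ns_spdk                              # target side lives here
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # first e810 port -> target
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator keeps the second port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

Traffic between 10.0.0.1 and 10.0.0.2 then genuinely crosses the wire between the two ports instead of being short-circuited by the local stack, which the two ping checks below confirm.
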
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:00.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:14:00.036 00:14:00.036 --- 10.0.0.2 ping statistics --- 00:14:00.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.036 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:14:00.036 00:14:00.036 --- 10.0.0.1 ping statistics --- 00:14:00.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.036 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3715782 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3715782 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3715782 ']' 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
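
nvmfappstart launches the target inside that namespace with the arguments accumulated in NVMF_APP (-i 0 for SHM id 0, -e 0xFFFF for the tracepoint mask) and blocks until the RPC socket answers. A rough stand-in for what waitforlisten does; the retry loop is an assumption, the real helper in autotest_common.sh is more careful:

    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    until $rpc_py rpc_get_methods > /dev/null 2>&1; do   # poll /var/tmp/spdk.sock
        sleep 0.5
    done
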
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:00.036 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.036 [2024-11-06 15:26:17.279132] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:14:00.036 [2024-11-06 15:26:17.279201] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.036 [2024-11-06 15:26:17.379614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.036 [2024-11-06 15:26:17.430370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.036 [2024-11-06 15:26:17.430424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.036 [2024-11-06 15:26:17.430432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.036 [2024-11-06 15:26:17.430440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.036 [2024-11-06 15:26:17.430446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:00.036 [2024-11-06 15:26:17.431257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.298 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:00.298 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:00.298 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:00.298 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:00.298 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.298 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.298 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:00.558 [2024-11-06 15:26:18.307329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.559 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:00.559 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:00.559 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:00.559 Malloc1 00:14:00.819 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:00.820 Malloc2 00:14:00.820 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:01.080 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:01.342 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.603 [2024-11-06 15:26:19.326613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.603 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:01.603 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d1e8ee47-f112-4d0e-ae43-a1819525fcb5 -a 10.0.0.2 -s 4420 -i 4 00:14:01.603 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:01.603 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:01.603 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:01.603 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:01.603 
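
The whole target-side configuration is six RPCs plus one kernel-initiator connect, all visible in the trace above. Pulled out in order, with flags copied verbatim from this run:

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB bdev, 512 B blocks
    $rpc_py bdev_malloc_create 64 512 -b Malloc2
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4           # -i 4: four I/O queues

Note -a (allow any host) on nvmf_create_subsystem: any host NQN may connect, and a namespace added without masking options is visible to all of them. That is the baseline the first visibility checks establish.
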
15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:03.527 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:03.527 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:03.527 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:03.788 [ 0]:0x1 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2449219c3a9a4c0594de1880df6e81f0 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2449219c3a9a4c0594de1880df6e81f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.788 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.048 [ 0]:0x1 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2449219c3a9a4c0594de1880df6e81f0 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2449219c3a9a4c0594de1880df6e81f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.048 15:26:21 
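
The ns_is_visible helper the trace keeps re-entering reduces to two nvme-cli probes, both visible above: the NSID must show up in list-ns, and its NGUID from id-ns must be non-zero (a hidden namespace reads back as all zeros). A sketch reconstructed from exactly those commands:

    ns_is_visible() {
        # $1 is an NSID such as 0x1; succeed only if /dev/nvme0 exposes it
        nvme list-ns /dev/nvme0 | grep "$1" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
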
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:04.048 [ 1]:0x2 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebca687c71b64083b99b2ee91f18de83 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebca687c71b64083b99b2ee91f18de83 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:04.048 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:04.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.308 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.568 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:04.568 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:04.568 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d1e8ee47-f112-4d0e-ae43-a1819525fcb5 -a 10.0.0.2 -s 4420 -i 4 00:14:04.828 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:04.828 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:04.828 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.828 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:04.828 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:04.829 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:06.740 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:06.740 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:06.740 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.740 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:06.740 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.740 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
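
Here the namespace is re-added with masking enabled: --no-auto-visible registers it in the subsystem but attaches it to no controller, so the freshly reconnected host must not see it. The three steps from the trace:

    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4   # NSID 1 should now be invisible to host1
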
return 0 00:14:06.740 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:06.740 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:07.001 [ 0]:0x2 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
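
The NOT wrapper that keeps appearing is the autotest helper that inverts an exit status, so the script can assert that a probe fails. The trace shows its full argument-vetting internals (valid_exec_arg, the es bookkeeping); stripped to its essence it behaves like this simplified sketch:

    NOT() {
        # Succeed only when the wrapped command fails.
        if "$@"; then
            return 1
        fi
        return 0
    }

    NOT ns_is_visible 0x1   # passes exactly while NSID 1 is masked from this host
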
nguid=ebca687c71b64083b99b2ee91f18de83 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebca687c71b64083b99b2ee91f18de83 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.001 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:07.262 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:07.262 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.262 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:07.263 [ 0]:0x1 00:14:07.263 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:07.263 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.263 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2449219c3a9a4c0594de1880df6e81f0 00:14:07.263 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2449219c3a9a4c0594de1880df6e81f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.263 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:07.263 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.263 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:07.263 [ 1]:0x2 00:14:07.263 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:07.263 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebca687c71b64083b99b2ee91f18de83 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebca687c71b64083b99b2ee91f18de83 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.524 15:26:25 
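
This is the crux of the test: per-host visibility is toggled purely with target-side RPCs while the host connection stays up, no reconnect needed. The pattern, as driven above and just below:

    $rpc_py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    nvme list-ns /dev/nvme0 | grep 0x1      # NSID 1 is now listed for host1
    $rpc_py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    NOT ns_is_visible 0x1                   # and masked again
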
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:07.524 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.525 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.525 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.525 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:07.525 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.525 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:07.525 [ 0]:0x2 00:14:07.785 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:07.785 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.785 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebca687c71b64083b99b2ee91f18de83 00:14:07.785 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebca687c71b64083b99b2ee91f18de83 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.785 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:07.785 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.785 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.046 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:08.046 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d1e8ee47-f112-4d0e-ae43-a1819525fcb5 -a 10.0.0.2 -s 4420 -i 4 00:14:08.046 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:08.046 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:08.046 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.046 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:08.046 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:08.046 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:09.960 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:10.221 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:10.221 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:10.221 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:10.221 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:10.221 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:10.221 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:10.221 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:10.221 [ 0]:0x1 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2449219c3a9a4c0594de1880df6e81f0 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2449219c3a9a4c0594de1880df6e81f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:10.221 [ 1]:0x2 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebca687c71b64083b99b2ee91f18de83 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebca687c71b64083b99b2ee91f18de83 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.221 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.482 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:10.742 [ 0]:0x2 00:14:10.742 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:10.742 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.742 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebca687c71b64083b99b2ee91f18de83 00:14:10.742 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebca687c71b64083b99b2ee91f18de83 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.742 15:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:10.742 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:10.743 [2024-11-06 15:26:28.679853] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:10.743 request: 00:14:10.743 { 00:14:10.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.743 "nsid": 2, 00:14:10.743 "host": "nqn.2016-06.io.spdk:host1", 00:14:10.743 "method": "nvmf_ns_remove_host", 00:14:10.743 "req_id": 1 00:14:10.743 } 00:14:10.743 Got JSON-RPC error response 00:14:10.743 response: 00:14:10.743 { 00:14:10.743 "code": -32602, 00:14:10.743 "message": "Invalid parameters" 00:14:10.743 } 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:10.743 15:26:28 
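
The negative test above confirms the error path: host1 was never granted access to NSID 2, so asking to remove it fails with JSON-RPC error -32602 (invalid params), which NOT converts into a pass. For reference, a reconstruction of the frame rpc.py sends over the UNIX socket for this call, as an ordinary JSON-RPC 2.0 request (the id value is whatever the client chose; 1 here is an assumption):

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "nvmf_ns_remove_host",
      "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "nsid": 2,
        "host": "nqn.2016-06.io.spdk:host1"
      }
    }
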
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.743 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.004 [ 0]:0x2 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ebca687c71b64083b99b2ee91f18de83 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ebca687c71b64083b99b2ee91f18de83 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:11.004 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.265 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3718281 00:14:11.265 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:11.265 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
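
For the second phase the harness starts another SPDK application to act as the host, pointing it at a private RPC socket so the two apps can be driven independently. From the trace (core mask -m 2 keeps it off the target's core 0):

    "$rootdir/build/bin/spdk_tgt" -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    # The hostrpc helper seen below is shorthand for:
    #   $rootdir/scripts/rpc.py -s /var/tmp/host.sock <method> ...
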
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.265 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3718281 /var/tmp/host.sock 00:14:11.265 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3718281 ']' 00:14:11.265 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:11.265 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:11.265 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:11.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:11.265 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:11.265 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.265 [2024-11-06 15:26:29.091070] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:14:11.265 [2024-11-06 15:26:29.091126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718281 ] 00:14:11.265 [2024-11-06 15:26:29.180294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.265 [2024-11-06 15:26:29.215767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.205 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:12.205 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:12.205 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.205 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:12.467 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 1a5cd29d-ed88-4714-8c44-accdd45f7028 00:14:12.468 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:12.468 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1A5CD29DED8847148C44ACCDD45F7028 -i 00:14:12.468 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ccc925a0-61c3-4c77-9595-78a01536fdfb 00:14:12.468 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:12.468 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g CCC925A061C34C77959578A01536FDFB -i 00:14:12.728 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
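
Both namespaces are then re-created with explicit NGUIDs derived from the UUIDs generated at the top. The uuid2nguid helper is just the UUID upper-cased with its dashes stripped; a sketch consistent with the tr -d - call and the upper-case result in the trace (the ${1^^} expansion is an assumption about the implementation):

    uuid2nguid() {
        echo "${1^^}" | tr -d -    # 1a5cd29d-... -> 1A5CD29DED8847148C44ACCDD45F7028
    }

    nguid=$(uuid2nguid 1a5cd29d-ed88-4714-8c44-accdd45f7028)
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
    # (trailing flags copied verbatim from this run)
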
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:12.989 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:12.989 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:12.989 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:13.249 nvme0n1 00:14:13.249 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:13.249 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:13.821 nvme1n2 00:14:13.821 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:13.821 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:13.821 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:13.821 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:13.821 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:13.821 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:13.821 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:13.821 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:13.821 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:14.082 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 1a5cd29d-ed88-4714-8c44-accdd45f7028 == \1\a\5\c\d\2\9\d\-\e\d\8\8\-\4\7\1\4\-\8\c\4\4\-\a\c\c\d\d\4\5\f\7\0\2\8 ]] 00:14:14.082 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:14.082 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:14.082 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:14.343 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
ccc925a0-61c3-4c77-9595-78a01536fdfb == \c\c\c\9\2\5\a\0\-\6\1\c\3\-\4\c\7\7\-\9\5\9\5\-\7\8\a\0\1\5\3\6\f\d\f\b ]] 00:14:14.343 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.343 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 1a5cd29d-ed88-4714-8c44-accdd45f7028 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 1A5CD29DED8847148C44ACCDD45F7028 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 1A5CD29DED8847148C44ACCDD45F7028 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:14.603 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 1A5CD29DED8847148C44ACCDD45F7028 00:14:14.865 [2024-11-06 15:26:32.610215] bdev.c:8613:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:14.865 [2024-11-06 15:26:32.610244] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:14.865 [2024-11-06 15:26:32.610251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.865 request: 00:14:14.865 { 00:14:14.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.865 "namespace": { 00:14:14.865 "bdev_name": 
"invalid", 00:14:14.865 "nsid": 1, 00:14:14.865 "nguid": "1A5CD29DED8847148C44ACCDD45F7028", 00:14:14.865 "no_auto_visible": false, 00:14:14.865 "no_metadata": false 00:14:14.865 }, 00:14:14.865 "method": "nvmf_subsystem_add_ns", 00:14:14.865 "req_id": 1 00:14:14.865 } 00:14:14.865 Got JSON-RPC error response 00:14:14.865 response: 00:14:14.865 { 00:14:14.865 "code": -32602, 00:14:14.865 "message": "Invalid parameters" 00:14:14.865 } 00:14:14.865 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:14.865 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:14.865 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:14.865 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:14.865 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 1a5cd29d-ed88-4714-8c44-accdd45f7028 00:14:14.865 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:14.865 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1A5CD29DED8847148C44ACCDD45F7028 -i 00:14:14.865 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:17.410 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:17.410 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:17.410 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:17.410 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:17.410 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3718281 00:14:17.410 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3718281 ']' 00:14:17.410 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3718281 00:14:17.410 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:17.410 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:17.410 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3718281 00:14:17.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:17.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:17.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3718281' 00:14:17.410 killing process with pid 3718281 00:14:17.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3718281 00:14:17.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3718281 00:14:17.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:17.670 rmmod nvme_tcp 00:14:17.670 rmmod nvme_fabrics 00:14:17.670 rmmod nvme_keyring 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3715782 ']' 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3715782 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3715782 ']' 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3715782 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3715782 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3715782' 00:14:17.670 killing process with pid 3715782 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3715782 00:14:17.670 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3715782 00:14:17.932 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:17.932 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:17.932 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:17.932 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:17.932 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:17.932 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
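For orientation, the namespace-masking flow traced above condenses to the target-side sequence below. This is a sketch, not the script verbatim: the rpc.py path is shortened, and the uppercase conversion inside uuid2nguid is inferred from the logged NGUID.

    rpc=./scripts/rpc.py                                   # the log uses the full workspace path
    uuid2nguid() { local u=${1^^}; echo "${u//-/}"; }      # 1a5cd29d-... -> 1A5CD29D...F7028
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
        -g "$(uuid2nguid 1a5cd29d-ed88-4714-8c44-accdd45f7028)" -i
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # the host-side SPDK instance on /var/tmp/host.sock then attaches as host1:
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

With -i (no auto-visible) set, only hosts added via nvmf_ns_add_host see the namespace, which is what the nguid comparisons above verify.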
00:14:17.932 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:17.932 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:17.932 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:17.932 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.932 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.932 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.845 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:19.845 00:14:19.845 real 0m28.423s 00:14:19.845 user 0m32.129s 00:14:19.845 sys 0m8.253s 00:14:19.845 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:19.845 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:19.845 ************************************ 00:14:19.845 END TEST nvmf_ns_masking 00:14:19.845 ************************************ 00:14:19.845 15:26:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:19.845 15:26:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:19.845 15:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:19.845 15:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:19.845 15:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:20.107 ************************************ 00:14:20.107 START TEST nvmf_nvme_cli 00:14:20.107 ************************************ 00:14:20.107 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:20.107 * Looking for test storage... 
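The teardown that closes the suite follows a fixed order in the trace; a condensed sketch (the '&& break' in the unload loop and the exact kill signal are assumptions, the rest mirrors the logged commands):

    set +e                                     # module unload may fail while initiators linger
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break       # prints the rmmod nvme_tcp/fabrics/keyring lines
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid"                            # killprocess <pid> in the trace
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK-tagged rules
    ip -4 addr flush cvl_0_1                   # final address cleanup, as logged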
00:14:20.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.107 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:20.107 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:20.107 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:20.107 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:20.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.108 --rc genhtml_branch_coverage=1 00:14:20.108 --rc genhtml_function_coverage=1 00:14:20.108 --rc genhtml_legend=1 00:14:20.108 --rc geninfo_all_blocks=1 00:14:20.108 --rc geninfo_unexecuted_blocks=1 00:14:20.108 00:14:20.108 ' 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:20.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.108 --rc genhtml_branch_coverage=1 00:14:20.108 --rc genhtml_function_coverage=1 00:14:20.108 --rc genhtml_legend=1 00:14:20.108 --rc geninfo_all_blocks=1 00:14:20.108 --rc geninfo_unexecuted_blocks=1 00:14:20.108 00:14:20.108 ' 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:20.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.108 --rc genhtml_branch_coverage=1 00:14:20.108 --rc genhtml_function_coverage=1 00:14:20.108 --rc genhtml_legend=1 00:14:20.108 --rc geninfo_all_blocks=1 00:14:20.108 --rc geninfo_unexecuted_blocks=1 00:14:20.108 00:14:20.108 ' 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:20.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.108 --rc genhtml_branch_coverage=1 00:14:20.108 --rc genhtml_function_coverage=1 00:14:20.108 --rc genhtml_legend=1 00:14:20.108 --rc geninfo_all_blocks=1 00:14:20.108 --rc geninfo_unexecuted_blocks=1 00:14:20.108 00:14:20.108 ' 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
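The lcov gate traced above reduces to a strictly-less comparison over dot/dash/colon-separated version components. A simplified sketch of the scripts/common.sh logic, with behavior inferred from the xtrace:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:                                   # split on the separators seen in the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # component greater: '<' fails
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # component smaller: '<' holds
        done
        return 1                                              # equal: strictly-less is false
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2: apply the branch-coverage LCOV_OPTS above"

Here lt 1.15 2 succeeds (1 < 2 on the first component), which is why the old-lcov option block is exported.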
00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:20.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:20.108 15:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.108 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.369 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:20.370 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:20.370 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:20.370 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:28.513 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:28.513 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.513 
15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:28.513 Found net devices under 0000:31:00.0: cvl_0_0 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:28.513 Found net devices under 0000:31:00.1: cvl_0_1 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:28.513 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:28.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:14:28.514 00:14:28.514 --- 10.0.0.2 ping statistics --- 00:14:28.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.514 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:28.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:14:28.514 00:14:28.514 --- 10.0.0.1 ping statistics --- 00:14:28.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.514 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3723782 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3723782 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3723782 ']' 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:28.514 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.514 [2024-11-06 15:26:45.754479] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
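The connectivity scaffold in the entries above isolates the target port in its own network namespace and whitelists NVMe/TCP with a tagged firewall rule. Condensed, with interface names as in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the SPDK_NVMF comment is what lets teardown strip exactly this rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # and back

The two pings are the sanity checks whose output appears above; the target app itself is then launched under ip netns exec cvl_0_0_ns_spdk.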
00:14:28.514 [2024-11-06 15:26:45.754543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.514 [2024-11-06 15:26:45.856507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.514 [2024-11-06 15:26:45.911833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.514 [2024-11-06 15:26:45.911887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.514 [2024-11-06 15:26:45.911896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.514 [2024-11-06 15:26:45.911903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.514 [2024-11-06 15:26:45.911910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.514 [2024-11-06 15:26:45.913944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.514 [2024-11-06 15:26:45.914070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.514 [2024-11-06 15:26:45.914227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.514 [2024-11-06 15:26:45.914228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.776 [2024-11-06 15:26:46.640037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.776 Malloc0 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
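The rpc_cmd invocations traced here and in the entries just below provision the target end to end. Gathered into one sketch as direct rpc.py calls, with the path shortened:

    rpc=./scripts/rpc.py                      # the log uses the full workspace path
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Two malloc bdevs mean two namespaces, which the discovery log and the serial-number count later in the trace both confirm.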
00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.776 Malloc1 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.776 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.776 [2024-11-06 15:26:46.754097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:14:29.038 00:14:29.038 Discovery Log Number of Records 2, Generation counter 2 00:14:29.038 =====Discovery Log Entry 0====== 00:14:29.038 trtype: tcp 00:14:29.038 adrfam: ipv4 00:14:29.038 subtype: current discovery subsystem 00:14:29.038 treq: not required 00:14:29.038 portid: 0 00:14:29.038 trsvcid: 4420 00:14:29.038 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:29.038 traddr: 10.0.0.2 00:14:29.038 eflags: explicit discovery connections, duplicate discovery information 00:14:29.038 sectype: none 00:14:29.038 =====Discovery Log Entry 1====== 00:14:29.038 trtype: tcp 00:14:29.038 adrfam: ipv4 00:14:29.038 subtype: nvme subsystem 00:14:29.038 treq: not required 00:14:29.038 portid: 0 00:14:29.038 trsvcid: 4420 00:14:29.038 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:29.038 traddr: 10.0.0.2 00:14:29.038 eflags: none 00:14:29.038 sectype: none 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:29.038 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:30.953 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:30.953 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:14:30.953 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.953 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:30.953 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:30.953 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:32.867 15:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:32.867 /dev/nvme0n2 ]] 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.867 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.868 15:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:32.868 rmmod nvme_tcp 00:14:32.868 rmmod nvme_fabrics 00:14:32.868 rmmod nvme_keyring 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3723782 ']' 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3723782 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3723782 ']' 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3723782 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:32.868 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3723782 00:14:33.129 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:33.129 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:33.129 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3723782' 00:14:33.129 killing process with pid 3723782 00:14:33.129 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3723782 00:14:33.129 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3723782 00:14:33.129 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:33.129 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:33.129 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:33.129 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:33.129 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:33.129 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:33.129 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:33.129 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:33.129 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:33.129 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.129 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.129 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:35.759 00:14:35.759 real 0m15.266s 00:14:35.759 user 0m22.671s 00:14:35.759 sys 0m6.465s 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.759 ************************************ 00:14:35.759 END TEST nvmf_nvme_cli 00:14:35.759 ************************************ 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:35.759 ************************************ 00:14:35.759 START TEST nvmf_vfio_user 00:14:35.759 ************************************ 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:35.759 * Looking for test storage... 00:14:35.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:35.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.759 --rc genhtml_branch_coverage=1 00:14:35.759 --rc genhtml_function_coverage=1 00:14:35.759 --rc genhtml_legend=1 00:14:35.759 --rc geninfo_all_blocks=1 00:14:35.759 --rc geninfo_unexecuted_blocks=1 00:14:35.759 00:14:35.759 ' 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:35.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.759 --rc genhtml_branch_coverage=1 00:14:35.759 --rc genhtml_function_coverage=1 00:14:35.759 --rc genhtml_legend=1 00:14:35.759 --rc geninfo_all_blocks=1 00:14:35.759 --rc geninfo_unexecuted_blocks=1 00:14:35.759 00:14:35.759 ' 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:35.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.759 --rc genhtml_branch_coverage=1 00:14:35.759 --rc genhtml_function_coverage=1 00:14:35.759 --rc genhtml_legend=1 00:14:35.759 --rc geninfo_all_blocks=1 00:14:35.759 --rc geninfo_unexecuted_blocks=1 00:14:35.759 00:14:35.759 ' 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:35.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.759 --rc genhtml_branch_coverage=1 00:14:35.759 --rc genhtml_function_coverage=1 00:14:35.759 --rc genhtml_legend=1 00:14:35.759 --rc geninfo_all_blocks=1 00:14:35.759 --rc geninfo_unexecuted_blocks=1 00:14:35.759 00:14:35.759 ' 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.759 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:35.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
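An aside on sizing, for readers correlating this setup with the identify dump further down: the two constants just set (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512) fix the namespace geometry. A 64 MiB malloc bdev carved into 512-byte blocks gives

    # 64 MiB divided into 512-byte LBAs
    echo $(( 64 * 1024 * 1024 / 512 ))   # prints 131072

which is exactly the "Size (in LBAs): 131072" and "LBA Format #00: Data Size: 512" lines reported by spdk_nvme_identify below.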
00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3725521 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3725521' 00:14:35.760 Process pid: 3725521 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3725521 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3725521 ']' 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:35.760 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:35.760 [2024-11-06 15:26:53.506710] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:14:35.760 [2024-11-06 15:26:53.506784] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.760 [2024-11-06 15:26:53.595861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.760 [2024-11-06 15:26:53.627269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.760 [2024-11-06 15:26:53.627301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
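The startup banner above comes from the target process launched by the command echoed at nvmf_vfio_user.sh@54. To reproduce this stage by hand, a minimal sketch (assuming the same build tree; the harness itself waits via its waitforlisten helper, the polling loop here is a stand-in):

    # Start nvmf_tgt with shm id 0, every tracepoint group enabled (-e 0xFFFF),
    # and reactors pinned to cores 0-3 (-m '[0,1,2,3]').
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    # Poll until the RPC socket answers before issuing any rpc.py command.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done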
00:14:35.760 [2024-11-06 15:26:53.627307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.760 [2024-11-06 15:26:53.627312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.760 [2024-11-06 15:26:53.627316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.760 [2024-11-06 15:26:53.628838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.760 [2024-11-06 15:26:53.629084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.760 [2024-11-06 15:26:53.629231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.760 [2024-11-06 15:26:53.629232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.330 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:36.330 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:36.330 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:37.715 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:37.715 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:37.715 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:37.715 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:37.715 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:37.715 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:37.715 Malloc1 00:14:37.976 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:37.976 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:38.237 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:38.499 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:38.499 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:38.499 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:38.499 Malloc2 00:14:38.499 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
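Condensing the rpc.py calls traced above, provisioning one vfio-user device amounts to the following (paths exactly as used in this run; the second device repeats the sequence with Malloc2, cnode2, serial SPDK2, and .../vfio-user2/2, and its remaining add_ns/add_listener calls follow just below):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER            # once, before any device
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc bdev_malloc_create 64 512 -b Malloc1         # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0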
00:14:38.760 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:39.021 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:39.021 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:39.021 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:39.284 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:39.284 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:39.284 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:39.284 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:39.284 [2024-11-06 15:26:57.027573] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:14:39.284 [2024-11-06 15:26:57.027611] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726217 ] 00:14:39.284 [2024-11-06 15:26:57.067055] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:39.284 [2024-11-06 15:26:57.072385] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:39.284 [2024-11-06 15:26:57.072403] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f62010fb000 00:14:39.284 [2024-11-06 15:26:57.073386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.284 [2024-11-06 15:26:57.074391] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.284 [2024-11-06 15:26:57.075393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.284 [2024-11-06 15:26:57.076405] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:39.284 [2024-11-06 15:26:57.077403] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:39.284 [2024-11-06 15:26:57.078407] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.284 [2024-11-06 15:26:57.079420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:39.284 [2024-11-06 15:26:57.080419] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.284 [2024-11-06 15:26:57.081433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:39.284 [2024-11-06 15:26:57.081439] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f62010f0000 00:14:39.284 [2024-11-06 15:26:57.082352] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:39.284 [2024-11-06 15:26:57.091809] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:39.284 [2024-11-06 15:26:57.091831] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:39.284 [2024-11-06 15:26:57.097540] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:39.284 [2024-11-06 15:26:57.097574] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:39.285 [2024-11-06 15:26:57.097632] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:39.285 [2024-11-06 15:26:57.097646] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:39.285 [2024-11-06 15:26:57.097650] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:39.285 [2024-11-06 15:26:57.098538] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:39.285 [2024-11-06 15:26:57.098546] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:39.285 [2024-11-06 15:26:57.098551] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:39.285 [2024-11-06 15:26:57.099545] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:39.285 [2024-11-06 15:26:57.099554] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:39.285 [2024-11-06 15:26:57.099559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:39.285 [2024-11-06 15:26:57.100550] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:39.285 [2024-11-06 15:26:57.100556] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:39.285 [2024-11-06 15:26:57.101550] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
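The register traffic traced above and continuing below is the standard NVMe controller-enable handshake, carried over the emulated BAR 0 instead of real PCI: the driver reads CAP at offset 0x0 and VS at 0x8 (value 0x10300, i.e. spec version 1.3), confirms CC.EN=0 and CSTS.RDY=0 at offsets 0x14 and 0x1c, programs the admin queue (ASQ at 0x28, ACQ at 0x30, AQA at 0x24), sets CC.EN=1, and polls CSTS until RDY=1. The probe can be re-run in isolation with the same invocation the harness used at nvmf_vfio_user.sh@83:

    # Identify the user-space controller directly over vfio-user,
    # with nvme/nvme_vfio/vfio_pci debug logging enabled.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci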
00:14:39.285 [2024-11-06 15:26:57.101556] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:39.285 [2024-11-06 15:26:57.101559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:39.285 [2024-11-06 15:26:57.101564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:39.285 [2024-11-06 15:26:57.101670] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:39.285 [2024-11-06 15:26:57.101674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:39.285 [2024-11-06 15:26:57.101678] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:39.285 [2024-11-06 15:26:57.102557] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:39.285 [2024-11-06 15:26:57.103558] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:39.285 [2024-11-06 15:26:57.104564] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:39.285 [2024-11-06 15:26:57.105562] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:39.285 [2024-11-06 15:26:57.105627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:39.285 [2024-11-06 15:26:57.106572] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:39.285 [2024-11-06 15:26:57.106578] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:39.285 [2024-11-06 15:26:57.106581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106596] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:39.285 [2024-11-06 15:26:57.106606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106618] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:39.285 [2024-11-06 15:26:57.106622] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.285 [2024-11-06 15:26:57.106624] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.285 [2024-11-06 15:26:57.106636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:39.285 [2024-11-06 15:26:57.106674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:39.285 [2024-11-06 15:26:57.106682] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:39.285 [2024-11-06 15:26:57.106685] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:39.285 [2024-11-06 15:26:57.106688] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:39.285 [2024-11-06 15:26:57.106692] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:39.285 [2024-11-06 15:26:57.106697] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:39.285 [2024-11-06 15:26:57.106700] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:39.285 [2024-11-06 15:26:57.106704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106719] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:39.285 [2024-11-06 15:26:57.106730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:39.285 [2024-11-06 15:26:57.106739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.285 [2024-11-06 15:26:57.106748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.285 [2024-11-06 15:26:57.106755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.285 [2024-11-06 15:26:57.106761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.285 [2024-11-06 15:26:57.106764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:39.285 [2024-11-06 15:26:57.106787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:39.285 [2024-11-06 15:26:57.106793] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:39.285 
[2024-11-06 15:26:57.106796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:39.285 [2024-11-06 15:26:57.106821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:39.285 [2024-11-06 15:26:57.106865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106876] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:39.285 [2024-11-06 15:26:57.106879] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:39.285 [2024-11-06 15:26:57.106882] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.285 [2024-11-06 15:26:57.106886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:39.285 [2024-11-06 15:26:57.106899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:39.285 [2024-11-06 15:26:57.106906] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:39.285 [2024-11-06 15:26:57.106914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:39.285 [2024-11-06 15:26:57.106924] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:39.285 [2024-11-06 15:26:57.106927] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.285 [2024-11-06 15:26:57.106930] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.285 [2024-11-06 15:26:57.106934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.286 [2024-11-06 15:26:57.106951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:39.286 [2024-11-06 15:26:57.106961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:39.286 [2024-11-06 15:26:57.106967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:39.286 [2024-11-06 15:26:57.106972] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:39.286 [2024-11-06 15:26:57.106975] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.286 [2024-11-06 15:26:57.106978] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.286 [2024-11-06 15:26:57.106982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.286 [2024-11-06 15:26:57.106995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:39.286 [2024-11-06 15:26:57.107001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:39.286 [2024-11-06 15:26:57.107006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:39.286 [2024-11-06 15:26:57.107012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:39.286 [2024-11-06 15:26:57.107019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:39.286 [2024-11-06 15:26:57.107023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:39.286 [2024-11-06 15:26:57.107026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:39.286 [2024-11-06 15:26:57.107030] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:39.286 [2024-11-06 15:26:57.107033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:39.286 [2024-11-06 15:26:57.107037] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:39.286 [2024-11-06 15:26:57.107051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:39.286 [2024-11-06 15:26:57.107061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:39.286 [2024-11-06 15:26:57.107069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:39.286 [2024-11-06 15:26:57.107078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:39.286 [2024-11-06 15:26:57.107086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:39.286 [2024-11-06 15:26:57.107098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:39.286 [2024-11-06 15:26:57.107106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:39.286 [2024-11-06 15:26:57.107112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:39.286 [2024-11-06 15:26:57.107123] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:39.286 [2024-11-06 15:26:57.107127] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:39.286 [2024-11-06 15:26:57.107129] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:39.286 [2024-11-06 15:26:57.107132] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:39.286 [2024-11-06 15:26:57.107134] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:39.286 [2024-11-06 15:26:57.107139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:39.286 [2024-11-06 15:26:57.107145] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:39.286 [2024-11-06 15:26:57.107148] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:39.286 [2024-11-06 15:26:57.107150] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.286 [2024-11-06 15:26:57.107154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:39.286 [2024-11-06 15:26:57.107160] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:39.286 [2024-11-06 15:26:57.107163] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.286 [2024-11-06 15:26:57.107165] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.286 [2024-11-06 15:26:57.107169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.286 [2024-11-06 15:26:57.107176] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:39.286 [2024-11-06 15:26:57.107179] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:39.286 [2024-11-06 15:26:57.107182] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:39.286 [2024-11-06 15:26:57.107186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:39.286 [2024-11-06 15:26:57.107191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:39.286 [2024-11-06 15:26:57.107200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:39.286 [2024-11-06 15:26:57.107209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:39.286 [2024-11-06 15:26:57.107214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:39.286 ===================================================== 00:14:39.286 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:39.286 ===================================================== 00:14:39.286 Controller Capabilities/Features 00:14:39.286 ================================ 00:14:39.286 Vendor ID: 4e58 00:14:39.286 Subsystem Vendor ID: 4e58 00:14:39.286 Serial Number: SPDK1 00:14:39.286 Model Number: SPDK bdev Controller 00:14:39.286 Firmware Version: 25.01 00:14:39.286 Recommended Arb Burst: 6 00:14:39.286 IEEE OUI Identifier: 8d 6b 50 00:14:39.286 Multi-path I/O 00:14:39.286 May have multiple subsystem ports: Yes 00:14:39.286 May have multiple controllers: Yes 00:14:39.286 Associated with SR-IOV VF: No 00:14:39.286 Max Data Transfer Size: 131072 00:14:39.286 Max Number of Namespaces: 32 00:14:39.286 Max Number of I/O Queues: 127 00:14:39.286 NVMe Specification Version (VS): 1.3 00:14:39.286 NVMe Specification Version (Identify): 1.3 00:14:39.286 Maximum Queue Entries: 256 00:14:39.286 Contiguous Queues Required: Yes 00:14:39.286 Arbitration Mechanisms Supported 00:14:39.286 Weighted Round Robin: Not Supported 00:14:39.286 Vendor Specific: Not Supported 00:14:39.286 Reset Timeout: 15000 ms 00:14:39.286 Doorbell Stride: 4 bytes 00:14:39.286 NVM Subsystem Reset: Not Supported 00:14:39.286 Command Sets Supported 00:14:39.286 NVM Command Set: Supported 00:14:39.286 Boot Partition: Not Supported 00:14:39.286 Memory Page Size Minimum: 4096 bytes 00:14:39.286 Memory Page Size Maximum: 4096 bytes 00:14:39.286 Persistent Memory Region: Not Supported 00:14:39.286 Optional Asynchronous Events Supported 00:14:39.286 Namespace Attribute Notices: Supported 00:14:39.286 Firmware Activation Notices: Not Supported 00:14:39.286 ANA Change Notices: Not Supported 00:14:39.286 PLE Aggregate Log Change Notices: Not Supported 00:14:39.286 LBA Status Info Alert Notices: Not Supported 00:14:39.286 EGE Aggregate Log Change Notices: Not Supported 00:14:39.286 Normal NVM Subsystem Shutdown event: Not Supported 00:14:39.286 Zone Descriptor Change Notices: Not Supported 00:14:39.286 Discovery Log Change Notices: Not Supported 00:14:39.286 Controller Attributes 00:14:39.286 128-bit Host Identifier: Supported 00:14:39.286 Non-Operational Permissive Mode: Not Supported 00:14:39.286 NVM Sets: Not Supported 00:14:39.286 Read Recovery Levels: Not Supported 00:14:39.286 Endurance Groups: Not Supported 00:14:39.286 Predictable Latency Mode: Not Supported 00:14:39.286 Traffic Based Keep ALive: Not Supported 00:14:39.286 Namespace Granularity: Not Supported 00:14:39.286 SQ Associations: Not Supported 00:14:39.287 UUID List: Not Supported 00:14:39.287 Multi-Domain Subsystem: Not Supported 00:14:39.287 Fixed Capacity Management: Not Supported 00:14:39.287 Variable Capacity Management: Not Supported 00:14:39.287 Delete Endurance Group: Not Supported 00:14:39.287 Delete NVM Set: Not Supported 00:14:39.287 Extended LBA Formats Supported: Not Supported 00:14:39.287 Flexible Data Placement Supported: Not Supported 00:14:39.287 00:14:39.287 Controller Memory Buffer Support 00:14:39.287 ================================ 00:14:39.287 
Supported: No 00:14:39.287 00:14:39.287 Persistent Memory Region Support 00:14:39.287 ================================ 00:14:39.287 Supported: No 00:14:39.287 00:14:39.287 Admin Command Set Attributes 00:14:39.287 ============================ 00:14:39.287 Security Send/Receive: Not Supported 00:14:39.287 Format NVM: Not Supported 00:14:39.287 Firmware Activate/Download: Not Supported 00:14:39.287 Namespace Management: Not Supported 00:14:39.287 Device Self-Test: Not Supported 00:14:39.287 Directives: Not Supported 00:14:39.287 NVMe-MI: Not Supported 00:14:39.287 Virtualization Management: Not Supported 00:14:39.287 Doorbell Buffer Config: Not Supported 00:14:39.287 Get LBA Status Capability: Not Supported 00:14:39.287 Command & Feature Lockdown Capability: Not Supported 00:14:39.287 Abort Command Limit: 4 00:14:39.287 Async Event Request Limit: 4 00:14:39.287 Number of Firmware Slots: N/A 00:14:39.287 Firmware Slot 1 Read-Only: N/A 00:14:39.287 Firmware Activation Without Reset: N/A 00:14:39.287 Multiple Update Detection Support: N/A 00:14:39.287 Firmware Update Granularity: No Information Provided 00:14:39.287 Per-Namespace SMART Log: No 00:14:39.287 Asymmetric Namespace Access Log Page: Not Supported 00:14:39.287 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:39.287 Command Effects Log Page: Supported 00:14:39.287 Get Log Page Extended Data: Supported 00:14:39.287 Telemetry Log Pages: Not Supported 00:14:39.287 Persistent Event Log Pages: Not Supported 00:14:39.287 Supported Log Pages Log Page: May Support 00:14:39.287 Commands Supported & Effects Log Page: Not Supported 00:14:39.287 Feature Identifiers & Effects Log Page:May Support 00:14:39.287 NVMe-MI Commands & Effects Log Page: May Support 00:14:39.287 Data Area 4 for Telemetry Log: Not Supported 00:14:39.287 Error Log Page Entries Supported: 128 00:14:39.287 Keep Alive: Supported 00:14:39.287 Keep Alive Granularity: 10000 ms 00:14:39.287 00:14:39.287 NVM Command Set Attributes 00:14:39.287 ========================== 00:14:39.287 Submission Queue Entry Size 00:14:39.287 Max: 64 00:14:39.287 Min: 64 00:14:39.287 Completion Queue Entry Size 00:14:39.287 Max: 16 00:14:39.287 Min: 16 00:14:39.287 Number of Namespaces: 32 00:14:39.287 Compare Command: Supported 00:14:39.287 Write Uncorrectable Command: Not Supported 00:14:39.287 Dataset Management Command: Supported 00:14:39.287 Write Zeroes Command: Supported 00:14:39.287 Set Features Save Field: Not Supported 00:14:39.287 Reservations: Not Supported 00:14:39.287 Timestamp: Not Supported 00:14:39.287 Copy: Supported 00:14:39.287 Volatile Write Cache: Present 00:14:39.287 Atomic Write Unit (Normal): 1 00:14:39.287 Atomic Write Unit (PFail): 1 00:14:39.287 Atomic Compare & Write Unit: 1 00:14:39.287 Fused Compare & Write: Supported 00:14:39.287 Scatter-Gather List 00:14:39.287 SGL Command Set: Supported (Dword aligned) 00:14:39.287 SGL Keyed: Not Supported 00:14:39.287 SGL Bit Bucket Descriptor: Not Supported 00:14:39.287 SGL Metadata Pointer: Not Supported 00:14:39.287 Oversized SGL: Not Supported 00:14:39.287 SGL Metadata Address: Not Supported 00:14:39.287 SGL Offset: Not Supported 00:14:39.287 Transport SGL Data Block: Not Supported 00:14:39.287 Replay Protected Memory Block: Not Supported 00:14:39.287 00:14:39.287 Firmware Slot Information 00:14:39.287 ========================= 00:14:39.287 Active slot: 1 00:14:39.287 Slot 1 Firmware Revision: 25.01 00:14:39.287 00:14:39.287 00:14:39.287 Commands Supported and Effects 00:14:39.287 ============================== 00:14:39.287 Admin 
Commands 00:14:39.287 -------------- 00:14:39.287 Get Log Page (02h): Supported 00:14:39.287 Identify (06h): Supported 00:14:39.287 Abort (08h): Supported 00:14:39.287 Set Features (09h): Supported 00:14:39.287 Get Features (0Ah): Supported 00:14:39.287 Asynchronous Event Request (0Ch): Supported 00:14:39.287 Keep Alive (18h): Supported 00:14:39.287 I/O Commands 00:14:39.287 ------------ 00:14:39.287 Flush (00h): Supported LBA-Change 00:14:39.287 Write (01h): Supported LBA-Change 00:14:39.287 Read (02h): Supported 00:14:39.287 Compare (05h): Supported 00:14:39.287 Write Zeroes (08h): Supported LBA-Change 00:14:39.287 Dataset Management (09h): Supported LBA-Change 00:14:39.287 Copy (19h): Supported LBA-Change 00:14:39.287 00:14:39.287 Error Log 00:14:39.287 ========= 00:14:39.287 00:14:39.287 Arbitration 00:14:39.287 =========== 00:14:39.287 Arbitration Burst: 1 00:14:39.287 00:14:39.287 Power Management 00:14:39.287 ================ 00:14:39.287 Number of Power States: 1 00:14:39.287 Current Power State: Power State #0 00:14:39.287 Power State #0: 00:14:39.287 Max Power: 0.00 W 00:14:39.287 Non-Operational State: Operational 00:14:39.287 Entry Latency: Not Reported 00:14:39.287 Exit Latency: Not Reported 00:14:39.287 Relative Read Throughput: 0 00:14:39.287 Relative Read Latency: 0 00:14:39.287 Relative Write Throughput: 0 00:14:39.287 Relative Write Latency: 0 00:14:39.287 Idle Power: Not Reported 00:14:39.287 Active Power: Not Reported 00:14:39.287 Non-Operational Permissive Mode: Not Supported 00:14:39.287 00:14:39.287 Health Information 00:14:39.287 ================== 00:14:39.287 Critical Warnings: 00:14:39.287 Available Spare Space: OK 00:14:39.287 Temperature: OK 00:14:39.287 Device Reliability: OK 00:14:39.287 Read Only: No 00:14:39.287 Volatile Memory Backup: OK 00:14:39.287 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:39.287 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:39.287 Available Spare: 0% 00:14:39.287
[2024-11-06 15:26:57.107286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:39.287 [2024-11-06 15:26:57.107294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:39.287 [2024-11-06 15:26:57.107315] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:39.287 [2024-11-06 15:26:57.107322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.287 [2024-11-06 15:26:57.107326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.287 [2024-11-06 15:26:57.107331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.287 [2024-11-06 15:26:57.107335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.287 [2024-11-06 15:26:57.107583] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:39.287 [2024-11-06 15:26:57.107591] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:39.287 [2024-11-06 15:26:57.108581] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:39.287 [2024-11-06 15:26:57.108622] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:39.287 [2024-11-06 15:26:57.108627] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:39.287 [2024-11-06 15:26:57.109590] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:39.287 [2024-11-06 15:26:57.109598] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:39.287 [2024-11-06 15:26:57.109650] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:39.287 [2024-11-06 15:26:57.111754] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:39.287
Available Spare Threshold: 0% 00:14:39.287 Life Percentage Used: 0% 00:14:39.287 Data Units Read: 0 00:14:39.287 Data Units Written: 0 00:14:39.287 Host Read Commands: 0 00:14:39.287 Host Write Commands: 0 00:14:39.287 Controller Busy Time: 0 minutes 00:14:39.287 Power Cycles: 0 00:14:39.287 Power On Hours: 0 hours 00:14:39.287 Unsafe Shutdowns: 0 00:14:39.287 Unrecoverable Media Errors: 0 00:14:39.288 Lifetime Error Log Entries: 0 00:14:39.288 Warning Temperature Time: 0 minutes 00:14:39.288 Critical Temperature Time: 0 minutes 00:14:39.288 00:14:39.288 Number of Queues 00:14:39.288 ================ 00:14:39.288 Number of I/O Submission Queues: 127 00:14:39.288 Number of I/O Completion Queues: 127 00:14:39.288 00:14:39.288 Active Namespaces 00:14:39.288 ================= 00:14:39.288 Namespace ID:1 00:14:39.288 Error Recovery Timeout: Unlimited 00:14:39.288 Command Set Identifier: NVM (00h) 00:14:39.288 Deallocate: Supported 00:14:39.288 Deallocated/Unwritten Error: Not Supported 00:14:39.288 Deallocated Read Value: Unknown 00:14:39.288 Deallocate in Write Zeroes: Not Supported 00:14:39.288 Deallocated Guard Field: 0xFFFF 00:14:39.288 Flush: Supported 00:14:39.288 Reservation: Supported 00:14:39.288 Namespace Sharing Capabilities: Multiple Controllers 00:14:39.288 Size (in LBAs): 131072 (0GiB) 00:14:39.288 Capacity (in LBAs): 131072 (0GiB) 00:14:39.288 Utilization (in LBAs): 131072 (0GiB) 00:14:39.288 NGUID: 58F2B1902E6246DFB19FF8541E42427D 00:14:39.288 UUID: 58f2b190-2e62-46df-b19f-f8541e42427d 00:14:39.288 Thin Provisioning: Not Supported 00:14:39.288 Per-NS Atomic Units: Yes 00:14:39.288 Atomic Boundary Size (Normal): 0 00:14:39.288 Atomic Boundary Size (PFail): 0 00:14:39.288 Atomic Boundary Offset: 0 00:14:39.288 Maximum Single Source Range Length: 65535 00:14:39.288 Maximum Copy Length: 65535 00:14:39.288 Maximum Source Range Count: 1 00:14:39.288 NGUID/EUI64 Never Reused: No 00:14:39.288 Namespace Write Protected: No 00:14:39.288 Number of LBA Formats: 1 00:14:39.288 Current LBA Format: LBA Format #00 00:14:39.288 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:39.288 00:14:39.288 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
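The @84 invocation above is the pattern every spdk_nvme_perf run in this suite follows; only the workload (-w), run time (-t) and the target path/NQN change between steps. As a standalone sketch (binary path and flag values copied from this run; SPDK_DIR, TRADDR and SUBNQN are illustrative shell variables, not names the harness itself uses):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRADDR=/var/run/vfio-user/domain/vfio-user1/1
SUBNQN=nqn.2019-07.io.spdk:cnode1

# -q 128: queue depth, -o 4096: I/O size in bytes, -w read: access pattern
# (the later runs use write and randrw), -t 5: run time in seconds,
# -c 0x2: core mask. -s 256 and -g are memory-setup flags carried over
# unchanged from the harness.
"$SPDK_DIR/build/bin/spdk_nvme_perf" \
  -r "trtype:VFIOUSER traddr:$TRADDR subnqn:$SUBNQN" \
  -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The enable/disable NOTICE lines that bracket the output below come from the target side of the vfio-user socket; the Latency(us) table in between is the host-side measurement.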
00:14:39.549 [2024-11-06 15:26:57.299425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.837 Initializing NVMe Controllers 00:14:44.837 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:44.837 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:44.837 Initialization complete. Launching workers. 00:14:44.837 ======================================================== 00:14:44.837 Latency(us) 00:14:44.837 Device Information : IOPS MiB/s Average min max 00:14:44.837 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40000.80 156.25 3199.80 842.18 9785.53 00:14:44.837 ======================================================== 00:14:44.837 Total : 40000.80 156.25 3199.80 842.18 9785.53 00:14:44.837 00:14:44.837 [2024-11-06 15:27:02.316279] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.837 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:44.837 [2024-11-06 15:27:02.507148] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:50.123 Initializing NVMe Controllers 00:14:50.123 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:50.123 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:50.123 Initialization complete. Launching workers. 
00:14:50.123 ======================================================== 00:14:50.123 Latency(us) 00:14:50.123 Device Information : IOPS MiB/s Average min max 00:14:50.123 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15974.40 62.40 8019.10 5983.67 15962.12 00:14:50.123 ======================================================== 00:14:50.123 Total : 15974.40 62.40 8019.10 5983.67 15962.12 00:14:50.123 00:14:50.123 [2024-11-06 15:27:07.544303] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:50.123 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:50.123 [2024-11-06 15:27:07.746137] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.411 [2024-11-06 15:27:12.823948] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.411 Initializing NVMe Controllers 00:14:55.411 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:55.411 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:55.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:55.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:55.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:55.411 Initialization complete. Launching workers. 00:14:55.411 Starting thread on core 2 00:14:55.411 Starting thread on core 3 00:14:55.411 Starting thread on core 1 00:14:55.411 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:55.411 [2024-11-06 15:27:13.075336] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.615 [2024-11-06 15:27:17.108877] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.615 Initializing NVMe Controllers 00:14:59.615 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.615 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.615 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:59.615 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:59.615 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:59.615 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:59.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:59.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:59.615 Initialization complete. Launching workers. 
00:14:59.615 Starting thread on core 1 with urgent priority queue 00:14:59.615 Starting thread on core 2 with urgent priority queue 00:14:59.615 Starting thread on core 3 with urgent priority queue 00:14:59.615 Starting thread on core 0 with urgent priority queue 00:14:59.615 SPDK bdev Controller (SPDK1 ) core 0: 5685.33 IO/s 17.59 secs/100000 ios 00:14:59.615 SPDK bdev Controller (SPDK1 ) core 1: 6042.00 IO/s 16.55 secs/100000 ios 00:14:59.615 SPDK bdev Controller (SPDK1 ) core 2: 5093.00 IO/s 19.63 secs/100000 ios 00:14:59.615 SPDK bdev Controller (SPDK1 ) core 3: 8057.67 IO/s 12.41 secs/100000 ios 00:14:59.615 ======================================================== 00:14:59.615 00:14:59.615 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:59.615 [2024-11-06 15:27:17.352149] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.615 Initializing NVMe Controllers 00:14:59.615 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.615 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.615 Namespace ID: 1 size: 0GB 00:14:59.615 Initialization complete. 00:14:59.615 INFO: using host memory buffer for IO 00:14:59.615 Hello world! 00:14:59.615 [2024-11-06 15:27:17.388371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.615 15:27:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:59.876 [2024-11-06 15:27:17.621441] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.819 Initializing NVMe Controllers 00:15:00.819 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.819 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.819 Initialization complete. Launching workers. 
00:15:00.819 submit (in ns) avg, min, max = 5398.5, 2843.3, 3998438.3 00:15:00.819 complete (in ns) avg, min, max = 18285.4, 1631.7, 4004618.3 00:15:00.819 00:15:00.819 Submit histogram 00:15:00.819 ================ 00:15:00.819 Range in us Cumulative Count 00:15:00.819 2.840 - 2.853: 0.0098% ( 2) 00:15:00.819 2.853 - 2.867: 0.0244% ( 3) 00:15:00.819 2.867 - 2.880: 0.0586% ( 7) 00:15:00.819 2.880 - 2.893: 0.2003% ( 29) 00:15:00.819 2.893 - 2.907: 0.5423% ( 70) 00:15:00.819 2.907 - 2.920: 1.2899% ( 153) 00:15:00.819 2.920 - 2.933: 2.7361% ( 296) 00:15:00.819 2.933 - 2.947: 5.1888% ( 502) 00:15:00.819 2.947 - 2.960: 8.8337% ( 746) 00:15:00.819 2.960 - 2.973: 13.7392% ( 1004) 00:15:00.819 2.973 - 2.987: 19.4899% ( 1177) 00:15:00.819 2.987 - 3.000: 24.9328% ( 1114) 00:15:00.819 3.000 - 3.013: 30.7226% ( 1185) 00:15:00.819 3.013 - 3.027: 36.8202% ( 1248) 00:15:00.819 3.027 - 3.040: 43.0498% ( 1275) 00:15:00.819 3.040 - 3.053: 49.3086% ( 1281) 00:15:00.819 3.053 - 3.067: 56.2564% ( 1422) 00:15:00.819 3.067 - 3.080: 64.6016% ( 1708) 00:15:00.819 3.080 - 3.093: 73.0640% ( 1732) 00:15:00.819 3.093 - 3.107: 80.7006% ( 1563) 00:15:00.819 3.107 - 3.120: 87.3846% ( 1368) 00:15:00.819 3.120 - 3.133: 92.8177% ( 1112) 00:15:00.819 3.133 - 3.147: 96.5310% ( 760) 00:15:00.819 3.147 - 3.160: 98.2606% ( 354) 00:15:00.819 3.160 - 3.173: 99.0814% ( 168) 00:15:00.819 3.173 - 3.187: 99.3502% ( 55) 00:15:00.819 3.187 - 3.200: 99.5114% ( 33) 00:15:00.819 3.200 - 3.213: 99.5505% ( 8) 00:15:00.819 3.213 - 3.227: 99.5652% ( 3) 00:15:00.819 3.333 - 3.347: 99.5700% ( 1) 00:15:00.819 3.360 - 3.373: 99.5749% ( 1) 00:15:00.819 3.373 - 3.387: 99.5798% ( 1) 00:15:00.819 3.413 - 3.440: 99.5847% ( 1) 00:15:00.819 3.493 - 3.520: 99.5945% ( 2) 00:15:00.819 3.573 - 3.600: 99.5994% ( 1) 00:15:00.819 3.600 - 3.627: 99.6042% ( 1) 00:15:00.819 3.813 - 3.840: 99.6140% ( 2) 00:15:00.819 3.840 - 3.867: 99.6189% ( 1) 00:15:00.819 3.893 - 3.920: 99.6238% ( 1) 00:15:00.819 4.133 - 4.160: 99.6287% ( 1) 00:15:00.819 4.213 - 4.240: 99.6336% ( 1) 00:15:00.819 4.507 - 4.533: 99.6384% ( 1) 00:15:00.819 4.587 - 4.613: 99.6433% ( 1) 00:15:00.819 4.613 - 4.640: 99.6482% ( 1) 00:15:00.819 4.747 - 4.773: 99.6531% ( 1) 00:15:00.819 4.800 - 4.827: 99.6629% ( 2) 00:15:00.819 4.880 - 4.907: 99.6775% ( 3) 00:15:00.819 4.907 - 4.933: 99.6824% ( 1) 00:15:00.819 4.933 - 4.960: 99.7068% ( 5) 00:15:00.819 4.960 - 4.987: 99.7117% ( 1) 00:15:00.819 4.987 - 5.013: 99.7313% ( 4) 00:15:00.819 5.013 - 5.040: 99.7410% ( 2) 00:15:00.819 5.040 - 5.067: 99.7557% ( 3) 00:15:00.819 5.067 - 5.093: 99.7655% ( 2) 00:15:00.819 5.093 - 5.120: 99.7752% ( 2) 00:15:00.819 5.120 - 5.147: 99.7801% ( 1) 00:15:00.819 5.147 - 5.173: 99.7850% ( 1) 00:15:00.819 5.173 - 5.200: 99.7899% ( 1) 00:15:00.819 5.200 - 5.227: 99.7948% ( 1) 00:15:00.819 5.333 - 5.360: 99.7997% ( 1) 00:15:00.819 5.413 - 5.440: 99.8046% ( 1) 00:15:00.819 5.600 - 5.627: 99.8094% ( 1) 00:15:00.819 5.680 - 5.707: 99.8143% ( 1) 00:15:00.819 5.760 - 5.787: 99.8192% ( 1) 00:15:00.819 5.920 - 5.947: 99.8241% ( 1) 00:15:00.819 6.053 - 6.080: 99.8339% ( 2) 00:15:00.819 6.133 - 6.160: 99.8437% ( 2) 00:15:00.819 6.267 - 6.293: 99.8534% ( 2) 00:15:00.819 6.373 - 6.400: 99.8632% ( 2) 00:15:00.819 6.480 - 6.507: 99.8681% ( 1) 00:15:00.819 6.773 - 6.800: 99.8730% ( 1) 00:15:00.819 6.827 - 6.880: 99.8779% ( 1) 00:15:00.819 6.880 - 6.933: 99.8827% ( 1) 00:15:00.819 6.933 - 6.987: 99.8876% ( 1) 00:15:00.819 6.987 - 7.040: 99.8925% ( 1) 00:15:00.819 7.147 - 7.200: 99.9072% ( 3) 00:15:00.819 7.733 - 7.787: 99.9121% ( 1) 00:15:00.819 
7.787 - 7.840: 99.9169% ( 1) 00:15:00.819 8.320 - 8.373: 99.9218% ( 1) 00:15:00.819 [2024-11-06 15:27:18.642005] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.819 8.587 - 8.640: 99.9267% ( 1) 00:15:00.819 8.747 - 8.800: 99.9316% ( 1) 00:15:00.819 9.067 - 9.120: 99.9365% ( 1) 00:15:00.819 15.147 - 15.253: 99.9414% ( 1) 00:15:00.819 3986.773 - 4014.080: 100.0000% ( 12) 00:15:00.819 00:15:00.819 Complete histogram 00:15:00.819 ================== 00:15:00.819 Range in us Cumulative Count 00:15:00.819 1.627 - 1.633: 0.0049% ( 1) 00:15:00.819 1.633 - 1.640: 0.0147% ( 2) 00:15:00.819 1.640 - 1.647: 0.5375% ( 107) 00:15:00.819 1.647 - 1.653: 1.0993% ( 115) 00:15:00.819 1.653 - 1.660: 1.1482% ( 10) 00:15:00.819 1.660 - 1.667: 1.2850% ( 28) 00:15:00.819 1.667 - 1.673: 1.3632% ( 16) 00:15:00.819 1.673 - 1.680: 1.3925% ( 6) 00:15:00.819 1.680 - 1.687: 1.4071% ( 3) 00:15:00.819 1.687 - 1.693: 1.4169% ( 2) 00:15:00.819 1.693 - 1.700: 1.5097% ( 19) 00:15:00.819 1.700 - 1.707: 20.2033% ( 3826) 00:15:00.819 1.707 - 1.720: 54.3069% ( 6980) 00:15:00.819 1.720 - 1.733: 73.4157% ( 3911) 00:15:00.819 1.733 - 1.747: 81.5948% ( 1674) 00:15:00.819 1.747 - 1.760: 83.2560% ( 340) 00:15:00.819 1.760 - 1.773: 86.4562% ( 655) 00:15:00.819 1.773 - 1.787: 91.8650% ( 1107) 00:15:00.819 1.787 - 1.800: 96.3258% ( 913) 00:15:00.819 1.800 - 1.813: 98.4414% ( 433) 00:15:00.819 1.813 - 1.827: 99.2427% ( 164) 00:15:00.819 1.827 - 1.840: 99.3795% ( 28) 00:15:00.819 1.840 - 1.853: 99.3844% ( 1) 00:15:00.819 1.867 - 1.880: 99.3941% ( 2) 00:15:00.819 1.973 - 1.987: 99.3990% ( 1) 00:15:00.819 3.173 - 3.187: 99.4039% ( 1) 00:15:00.819 3.360 - 3.373: 99.4088% ( 1) 00:15:00.819 3.387 - 3.400: 99.4137% ( 1) 00:15:00.819 3.400 - 3.413: 99.4186% ( 1) 00:15:00.819 3.413 - 3.440: 99.4235% ( 1) 00:15:00.819 3.440 - 3.467: 99.4283% ( 1) 00:15:00.819 3.467 - 3.493: 99.4381% ( 2) 00:15:00.819 3.547 - 3.573: 99.4479% ( 2) 00:15:00.819 3.573 - 3.600: 99.4528% ( 1) 00:15:00.819 3.627 - 3.653: 99.4577% ( 1) 00:15:00.819 3.760 - 3.787: 99.4674% ( 2) 00:15:00.820 3.787 - 3.813: 99.4772% ( 2) 00:15:00.820 3.813 - 3.840: 99.4821% ( 1) 00:15:00.820 3.840 - 3.867: 99.4919% ( 2) 00:15:00.820 3.947 - 3.973: 99.5016% ( 2) 00:15:00.820 4.133 - 4.160: 99.5065% ( 1) 00:15:00.820 4.187 - 4.213: 99.5114% ( 1) 00:15:00.820 4.453 - 4.480: 99.5212% ( 2) 00:15:00.820 4.507 - 4.533: 99.5261% ( 1) 00:15:00.820 4.773 - 4.800: 99.5310% ( 1) 00:15:00.820 4.933 - 4.960: 99.5358% ( 1) 00:15:00.820 5.040 - 5.067: 99.5407% ( 1) 00:15:00.820 5.227 - 5.253: 99.5456% ( 1) 00:15:00.820 5.387 - 5.413: 99.5505% ( 1) 00:15:00.820 5.440 - 5.467: 99.5603% ( 2) 00:15:00.820 5.920 - 5.947: 99.5652% ( 1) 00:15:00.820 6.213 - 6.240: 99.5700% ( 1) 00:15:00.820 6.293 - 6.320: 99.5749% ( 1) 00:15:00.820 6.613 - 6.640: 99.5798% ( 1) 00:15:00.820 10.827 - 10.880: 99.5847% ( 1) 00:15:00.820 3263.147 - 3276.800: 99.5896% ( 1) 00:15:00.820 3986.773 - 4014.080: 100.0000% ( 84) 00:15:00.820 00:15:00.820 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:00.820 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:00.820 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:00.820 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:00.820 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:01.081 [ 00:15:01.081 { 00:15:01.081 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.081 "subtype": "Discovery", 00:15:01.081 "listen_addresses": [], 00:15:01.081 "allow_any_host": true, 00:15:01.081 "hosts": [] 00:15:01.081 }, 00:15:01.081 { 00:15:01.081 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:01.081 "subtype": "NVMe", 00:15:01.081 "listen_addresses": [ 00:15:01.081 { 00:15:01.081 "trtype": "VFIOUSER", 00:15:01.081 "adrfam": "IPv4", 00:15:01.081 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:01.081 "trsvcid": "0" 00:15:01.081 } 00:15:01.081 ], 00:15:01.081 "allow_any_host": true, 00:15:01.081 "hosts": [], 00:15:01.081 "serial_number": "SPDK1", 00:15:01.081 "model_number": "SPDK bdev Controller", 00:15:01.081 "max_namespaces": 32, 00:15:01.081 "min_cntlid": 1, 00:15:01.081 "max_cntlid": 65519, 00:15:01.081 "namespaces": [ 00:15:01.081 { 00:15:01.081 "nsid": 1, 00:15:01.081 "bdev_name": "Malloc1", 00:15:01.081 "name": "Malloc1", 00:15:01.081 "nguid": "58F2B1902E6246DFB19FF8541E42427D", 00:15:01.081 "uuid": "58f2b190-2e62-46df-b19f-f8541e42427d" 00:15:01.081 } 00:15:01.081 ] 00:15:01.081 }, 00:15:01.081 { 00:15:01.081 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:01.081 "subtype": "NVMe", 00:15:01.081 "listen_addresses": [ 00:15:01.081 { 00:15:01.081 "trtype": "VFIOUSER", 00:15:01.081 "adrfam": "IPv4", 00:15:01.081 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:01.081 "trsvcid": "0" 00:15:01.081 } 00:15:01.081 ], 00:15:01.081 "allow_any_host": true, 00:15:01.081 "hosts": [], 00:15:01.081 "serial_number": "SPDK2", 00:15:01.081 "model_number": "SPDK bdev Controller", 00:15:01.081 "max_namespaces": 32, 00:15:01.081 "min_cntlid": 1, 00:15:01.081 "max_cntlid": 65519, 00:15:01.081 "namespaces": [ 00:15:01.081 { 00:15:01.081 "nsid": 1, 00:15:01.081 "bdev_name": "Malloc2", 00:15:01.081 "name": "Malloc2", 00:15:01.081 "nguid": "2268B3510C6049119546BE09EF64376C", 00:15:01.081 "uuid": "2268b351-0c60-4911-9546-be09ef64376c" 00:15:01.081 } 00:15:01.081 ] 00:15:01.081 } 00:15:01.081 ] 00:15:01.081 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:01.081 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3730993 00:15:01.081 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:01.081 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:01.081 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:01.081 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:01.081 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:01.081 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:01.081 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:01.081 15:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:01.081 [2024-11-06 15:27:19.017871] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.081 Malloc3 00:15:01.342 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:01.342 [2024-11-06 15:27:19.213224] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.342 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:01.342 Asynchronous Event Request test 00:15:01.342 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.342 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.342 Registering asynchronous event callbacks... 00:15:01.342 Starting namespace attribute notice tests for all controllers... 00:15:01.342 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:01.342 aer_cb - Changed Namespace 00:15:01.342 Cleaning up... 00:15:01.605 [ 00:15:01.605 { 00:15:01.605 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.605 "subtype": "Discovery", 00:15:01.605 "listen_addresses": [], 00:15:01.605 "allow_any_host": true, 00:15:01.605 "hosts": [] 00:15:01.605 }, 00:15:01.605 { 00:15:01.605 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:01.605 "subtype": "NVMe", 00:15:01.605 "listen_addresses": [ 00:15:01.605 { 00:15:01.605 "trtype": "VFIOUSER", 00:15:01.605 "adrfam": "IPv4", 00:15:01.605 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:01.605 "trsvcid": "0" 00:15:01.605 } 00:15:01.605 ], 00:15:01.605 "allow_any_host": true, 00:15:01.605 "hosts": [], 00:15:01.605 "serial_number": "SPDK1", 00:15:01.605 "model_number": "SPDK bdev Controller", 00:15:01.605 "max_namespaces": 32, 00:15:01.605 "min_cntlid": 1, 00:15:01.605 "max_cntlid": 65519, 00:15:01.605 "namespaces": [ 00:15:01.605 { 00:15:01.605 "nsid": 1, 00:15:01.605 "bdev_name": "Malloc1", 00:15:01.605 "name": "Malloc1", 00:15:01.605 "nguid": "58F2B1902E6246DFB19FF8541E42427D", 00:15:01.605 "uuid": "58f2b190-2e62-46df-b19f-f8541e42427d" 00:15:01.605 }, 00:15:01.605 { 00:15:01.605 "nsid": 2, 00:15:01.605 "bdev_name": "Malloc3", 00:15:01.605 "name": "Malloc3", 00:15:01.605 "nguid": "A422C45099A442F8BD81377534797E0A", 00:15:01.605 "uuid": "a422c450-99a4-42f8-bd81-377534797e0a" 00:15:01.605 } 00:15:01.605 ] 00:15:01.605 }, 00:15:01.605 { 00:15:01.605 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:01.605 "subtype": "NVMe", 00:15:01.605 "listen_addresses": [ 00:15:01.605 { 00:15:01.605 "trtype": "VFIOUSER", 00:15:01.605 "adrfam": "IPv4", 00:15:01.605 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:01.605 "trsvcid": "0" 00:15:01.605 } 00:15:01.605 ], 00:15:01.605 "allow_any_host": true, 00:15:01.605 "hosts": [], 00:15:01.605 "serial_number": "SPDK2", 00:15:01.605 "model_number": "SPDK bdev 
Controller", 00:15:01.605 "max_namespaces": 32, 00:15:01.605 "min_cntlid": 1, 00:15:01.605 "max_cntlid": 65519, 00:15:01.605 "namespaces": [ 00:15:01.605 { 00:15:01.605 "nsid": 1, 00:15:01.605 "bdev_name": "Malloc2", 00:15:01.605 "name": "Malloc2", 00:15:01.605 "nguid": "2268B3510C6049119546BE09EF64376C", 00:15:01.605 "uuid": "2268b351-0c60-4911-9546-be09ef64376c" 00:15:01.605 } 00:15:01.605 ] 00:15:01.605 } 00:15:01.605 ] 00:15:01.605 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3730993 00:15:01.605 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:01.605 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:01.605 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:01.605 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:01.605 [2024-11-06 15:27:19.448988] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:15:01.605 [2024-11-06 15:27:19.449032] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731148 ] 00:15:01.605 [2024-11-06 15:27:19.486975] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:01.605 [2024-11-06 15:27:19.495918] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:01.605 [2024-11-06 15:27:19.495937] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0814085000 00:15:01.605 [2024-11-06 15:27:19.496920] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.605 [2024-11-06 15:27:19.497925] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.605 [2024-11-06 15:27:19.498931] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.605 [2024-11-06 15:27:19.499942] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.605 [2024-11-06 15:27:19.500947] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.605 [2024-11-06 15:27:19.501954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.605 [2024-11-06 15:27:19.502961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.605 [2024-11-06 15:27:19.503970] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:01.605 [2024-11-06 15:27:19.504974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:01.605 [2024-11-06 15:27:19.504982] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f081407a000 00:15:01.605 [2024-11-06 15:27:19.505893] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:01.605 [2024-11-06 15:27:19.515261] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:01.605 [2024-11-06 15:27:19.515280] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:01.605 [2024-11-06 15:27:19.520350] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:01.605 [2024-11-06 15:27:19.520386] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:01.605 [2024-11-06 15:27:19.520446] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:01.605 [2024-11-06 15:27:19.520456] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:01.605 [2024-11-06 15:27:19.520460] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:01.605 [2024-11-06 15:27:19.521353] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:01.605 [2024-11-06 15:27:19.521361] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:01.605 [2024-11-06 15:27:19.521366] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:01.605 [2024-11-06 15:27:19.522356] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:01.605 [2024-11-06 15:27:19.522362] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:01.606 [2024-11-06 15:27:19.522368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:01.606 [2024-11-06 15:27:19.523361] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:01.606 [2024-11-06 15:27:19.523368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:01.606 [2024-11-06 15:27:19.524364] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:01.606 [2024-11-06 15:27:19.524371] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:15:01.606 [2024-11-06 15:27:19.524377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:01.606 [2024-11-06 15:27:19.524382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:01.606 [2024-11-06 15:27:19.524487] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:01.606 [2024-11-06 15:27:19.524490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:01.606 [2024-11-06 15:27:19.524494] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:01.606 [2024-11-06 15:27:19.525368] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:01.606 [2024-11-06 15:27:19.526377] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:01.606 [2024-11-06 15:27:19.527388] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:01.606 [2024-11-06 15:27:19.528387] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.606 [2024-11-06 15:27:19.528420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:01.606 [2024-11-06 15:27:19.529395] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:01.606 [2024-11-06 15:27:19.529401] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:01.606 [2024-11-06 15:27:19.529405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.529420] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:01.606 [2024-11-06 15:27:19.529425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.529434] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.606 [2024-11-06 15:27:19.529438] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.606 [2024-11-06 15:27:19.529440] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.606 [2024-11-06 15:27:19.529449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.606 [2024-11-06 15:27:19.536750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:01.606 
[2024-11-06 15:27:19.536759] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:01.606 [2024-11-06 15:27:19.536762] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:01.606 [2024-11-06 15:27:19.536765] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:01.606 [2024-11-06 15:27:19.536769] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:01.606 [2024-11-06 15:27:19.536774] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:01.606 [2024-11-06 15:27:19.536778] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:01.606 [2024-11-06 15:27:19.536782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.536790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.536797] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:01.606 [2024-11-06 15:27:19.544750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:01.606 [2024-11-06 15:27:19.544760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.606 [2024-11-06 15:27:19.544766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.606 [2024-11-06 15:27:19.544772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.606 [2024-11-06 15:27:19.544778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.606 [2024-11-06 15:27:19.544782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.544787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.544793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:01.606 [2024-11-06 15:27:19.552749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:01.606 [2024-11-06 15:27:19.552756] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:01.606 [2024-11-06 15:27:19.552760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:01.606 [2024-11-06 15:27:19.552765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.552769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.552776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.606 [2024-11-06 15:27:19.560749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:01.606 [2024-11-06 15:27:19.560795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.560801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.560806] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:01.606 [2024-11-06 15:27:19.560809] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:01.606 [2024-11-06 15:27:19.560812] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.606 [2024-11-06 15:27:19.560816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:01.606 [2024-11-06 15:27:19.568748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:01.606 [2024-11-06 15:27:19.568756] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:01.606 [2024-11-06 15:27:19.568767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.568773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.568778] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.606 [2024-11-06 15:27:19.568782] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.606 [2024-11-06 15:27:19.568784] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.606 [2024-11-06 15:27:19.568788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.606 [2024-11-06 15:27:19.576749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:01.606 [2024-11-06 15:27:19.576759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.576765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:01.606 [2024-11-06 15:27:19.576770] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.606 [2024-11-06 15:27:19.576773] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.606 [2024-11-06 15:27:19.576776] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.606 [2024-11-06 15:27:19.576780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.607 [2024-11-06 15:27:19.584750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:01.607 [2024-11-06 15:27:19.584758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:01.607 [2024-11-06 15:27:19.584762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:01.607 [2024-11-06 15:27:19.584768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:01.607 [2024-11-06 15:27:19.584773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:01.607 [2024-11-06 15:27:19.584776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:01.607 [2024-11-06 15:27:19.584780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:01.607 [2024-11-06 15:27:19.584784] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:01.607 [2024-11-06 15:27:19.584787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:01.607 [2024-11-06 15:27:19.584791] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:01.607 [2024-11-06 15:27:19.584803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:01.868 [2024-11-06 15:27:19.592751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:01.868 [2024-11-06 15:27:19.592762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:01.868 [2024-11-06 15:27:19.600751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:01.869 [2024-11-06 15:27:19.600761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:01.869 [2024-11-06 15:27:19.608750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:01.869 [2024-11-06 15:27:19.608760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.869 [2024-11-06 15:27:19.616750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:01.869 [2024-11-06 15:27:19.616762] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:01.869 [2024-11-06 15:27:19.616765] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:01.869 [2024-11-06 15:27:19.616768] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:01.869 [2024-11-06 15:27:19.616771] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:01.869 [2024-11-06 15:27:19.616773] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:01.869 [2024-11-06 15:27:19.616778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:01.869 [2024-11-06 15:27:19.616783] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:01.869 [2024-11-06 15:27:19.616786] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:01.869 [2024-11-06 15:27:19.616789] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.869 [2024-11-06 15:27:19.616793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:01.869 [2024-11-06 15:27:19.616799] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:01.869 [2024-11-06 15:27:19.616802] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.869 [2024-11-06 15:27:19.616804] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.869 [2024-11-06 15:27:19.616809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.869 [2024-11-06 15:27:19.616814] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:01.869 [2024-11-06 15:27:19.616818] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:01.869 [2024-11-06 15:27:19.616820] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.869 [2024-11-06 15:27:19.616824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:01.869 [2024-11-06 15:27:19.624751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:01.869 [2024-11-06 15:27:19.624761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:01.869 [2024-11-06 15:27:19.624769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:01.869 
[2024-11-06 15:27:19.624776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:01.869 ===================================================== 00:15:01.869 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:01.869 ===================================================== 00:15:01.869 Controller Capabilities/Features 00:15:01.869 ================================ 00:15:01.869 Vendor ID: 4e58 00:15:01.869 Subsystem Vendor ID: 4e58 00:15:01.869 Serial Number: SPDK2 00:15:01.869 Model Number: SPDK bdev Controller 00:15:01.869 Firmware Version: 25.01 00:15:01.869 Recommended Arb Burst: 6 00:15:01.869 IEEE OUI Identifier: 8d 6b 50 00:15:01.869 Multi-path I/O 00:15:01.869 May have multiple subsystem ports: Yes 00:15:01.869 May have multiple controllers: Yes 00:15:01.869 Associated with SR-IOV VF: No 00:15:01.869 Max Data Transfer Size: 131072 00:15:01.869 Max Number of Namespaces: 32 00:15:01.869 Max Number of I/O Queues: 127 00:15:01.869 NVMe Specification Version (VS): 1.3 00:15:01.869 NVMe Specification Version (Identify): 1.3 00:15:01.869 Maximum Queue Entries: 256 00:15:01.869 Contiguous Queues Required: Yes 00:15:01.869 Arbitration Mechanisms Supported 00:15:01.869 Weighted Round Robin: Not Supported 00:15:01.869 Vendor Specific: Not Supported 00:15:01.869 Reset Timeout: 15000 ms 00:15:01.869 Doorbell Stride: 4 bytes 00:15:01.869 NVM Subsystem Reset: Not Supported 00:15:01.869 Command Sets Supported 00:15:01.869 NVM Command Set: Supported 00:15:01.869 Boot Partition: Not Supported 00:15:01.869 Memory Page Size Minimum: 4096 bytes 00:15:01.869 Memory Page Size Maximum: 4096 bytes 00:15:01.869 Persistent Memory Region: Not Supported 00:15:01.869 Optional Asynchronous Events Supported 00:15:01.869 Namespace Attribute Notices: Supported 00:15:01.869 Firmware Activation Notices: Not Supported 00:15:01.869 ANA Change Notices: Not Supported 00:15:01.869 PLE Aggregate Log Change Notices: Not Supported 00:15:01.869 LBA Status Info Alert Notices: Not Supported 00:15:01.869 EGE Aggregate Log Change Notices: Not Supported 00:15:01.869 Normal NVM Subsystem Shutdown event: Not Supported 00:15:01.869 Zone Descriptor Change Notices: Not Supported 00:15:01.869 Discovery Log Change Notices: Not Supported 00:15:01.869 Controller Attributes 00:15:01.869 128-bit Host Identifier: Supported 00:15:01.869 Non-Operational Permissive Mode: Not Supported 00:15:01.869 NVM Sets: Not Supported 00:15:01.869 Read Recovery Levels: Not Supported 00:15:01.869 Endurance Groups: Not Supported 00:15:01.869 Predictable Latency Mode: Not Supported 00:15:01.869 Traffic Based Keep ALive: Not Supported 00:15:01.869 Namespace Granularity: Not Supported 00:15:01.869 SQ Associations: Not Supported 00:15:01.869 UUID List: Not Supported 00:15:01.869 Multi-Domain Subsystem: Not Supported 00:15:01.869 Fixed Capacity Management: Not Supported 00:15:01.869 Variable Capacity Management: Not Supported 00:15:01.869 Delete Endurance Group: Not Supported 00:15:01.869 Delete NVM Set: Not Supported 00:15:01.869 Extended LBA Formats Supported: Not Supported 00:15:01.869 Flexible Data Placement Supported: Not Supported 00:15:01.869 00:15:01.869 Controller Memory Buffer Support 00:15:01.869 ================================ 00:15:01.869 Supported: No 00:15:01.869 00:15:01.869 Persistent Memory Region Support 00:15:01.869 ================================ 00:15:01.869 Supported: No 00:15:01.869 00:15:01.869 Admin Command Set Attributes
00:15:01.869 ============================ 00:15:01.869 Security Send/Receive: Not Supported 00:15:01.869 Format NVM: Not Supported 00:15:01.869 Firmware Activate/Download: Not Supported 00:15:01.869 Namespace Management: Not Supported 00:15:01.869 Device Self-Test: Not Supported 00:15:01.869 Directives: Not Supported 00:15:01.869 NVMe-MI: Not Supported 00:15:01.869 Virtualization Management: Not Supported 00:15:01.869 Doorbell Buffer Config: Not Supported 00:15:01.869 Get LBA Status Capability: Not Supported 00:15:01.869 Command & Feature Lockdown Capability: Not Supported 00:15:01.869 Abort Command Limit: 4 00:15:01.869 Async Event Request Limit: 4 00:15:01.869 Number of Firmware Slots: N/A 00:15:01.869 Firmware Slot 1 Read-Only: N/A 00:15:01.869 Firmware Activation Without Reset: N/A 00:15:01.869 Multiple Update Detection Support: N/A 00:15:01.869 Firmware Update Granularity: No Information Provided 00:15:01.869 Per-Namespace SMART Log: No 00:15:01.869 Asymmetric Namespace Access Log Page: Not Supported 00:15:01.869 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:01.869 Command Effects Log Page: Supported 00:15:01.869 Get Log Page Extended Data: Supported 00:15:01.869 Telemetry Log Pages: Not Supported 00:15:01.869 Persistent Event Log Pages: Not Supported 00:15:01.869 Supported Log Pages Log Page: May Support 00:15:01.869 Commands Supported & Effects Log Page: Not Supported 00:15:01.869 Feature Identifiers & Effects Log Page:May Support 00:15:01.869 NVMe-MI Commands & Effects Log Page: May Support 00:15:01.869 Data Area 4 for Telemetry Log: Not Supported 00:15:01.869 Error Log Page Entries Supported: 128 00:15:01.869 Keep Alive: Supported 00:15:01.869 Keep Alive Granularity: 10000 ms 00:15:01.869 00:15:01.869 NVM Command Set Attributes 00:15:01.869 ========================== 00:15:01.869 Submission Queue Entry Size 00:15:01.869 Max: 64 00:15:01.869 Min: 64 00:15:01.869 Completion Queue Entry Size 00:15:01.869 Max: 16 00:15:01.869 Min: 16 00:15:01.869 Number of Namespaces: 32 00:15:01.869 Compare Command: Supported 00:15:01.869 Write Uncorrectable Command: Not Supported 00:15:01.869 Dataset Management Command: Supported 00:15:01.869 Write Zeroes Command: Supported 00:15:01.869 Set Features Save Field: Not Supported 00:15:01.869 Reservations: Not Supported 00:15:01.869 Timestamp: Not Supported 00:15:01.869 Copy: Supported 00:15:01.869 Volatile Write Cache: Present 00:15:01.869 Atomic Write Unit (Normal): 1 00:15:01.869 Atomic Write Unit (PFail): 1 00:15:01.869 Atomic Compare & Write Unit: 1 00:15:01.869 Fused Compare & Write: Supported 00:15:01.869 Scatter-Gather List 00:15:01.869 SGL Command Set: Supported (Dword aligned) 00:15:01.869 SGL Keyed: Not Supported 00:15:01.869 SGL Bit Bucket Descriptor: Not Supported 00:15:01.869 SGL Metadata Pointer: Not Supported 00:15:01.869 Oversized SGL: Not Supported 00:15:01.869 SGL Metadata Address: Not Supported 00:15:01.869 SGL Offset: Not Supported 00:15:01.869 Transport SGL Data Block: Not Supported 00:15:01.869 Replay Protected Memory Block: Not Supported 00:15:01.869 00:15:01.869 Firmware Slot Information 00:15:01.869 ========================= 00:15:01.869 Active slot: 1 00:15:01.869 Slot 1 Firmware Revision: 25.01 00:15:01.869 00:15:01.869 00:15:01.869 Commands Supported and Effects 00:15:01.869 ============================== 00:15:01.869 Admin Commands 00:15:01.869 -------------- 00:15:01.869 Get Log Page (02h): Supported 00:15:01.869 Identify (06h): Supported 00:15:01.869 Abort (08h): Supported 00:15:01.869 Set Features (09h): Supported
00:15:01.869 Get Features (0Ah): Supported 00:15:01.869 Asynchronous Event Request (0Ch): Supported 00:15:01.869 Keep Alive (18h): Supported 00:15:01.869 I/O Commands 00:15:01.869 ------------ 00:15:01.869 Flush (00h): Supported LBA-Change 00:15:01.869 Write (01h): Supported LBA-Change 00:15:01.869 Read (02h): Supported 00:15:01.869 Compare (05h): Supported 00:15:01.869 Write Zeroes (08h): Supported LBA-Change 00:15:01.869 Dataset Management (09h): Supported LBA-Change 00:15:01.869 Copy (19h): Supported LBA-Change 00:15:01.869 00:15:01.869 Error Log 00:15:01.869 ========= 00:15:01.869 00:15:01.869 Arbitration 00:15:01.869 =========== 00:15:01.870 Arbitration Burst: 1 00:15:01.870 00:15:01.870 Power Management 00:15:01.870 ================ 00:15:01.870 Number of Power States: 1 00:15:01.870 Current Power State: Power State #0 00:15:01.870 Power State #0: 00:15:01.870 Max Power: 0.00 W 00:15:01.870 Non-Operational State: Operational 00:15:01.870 Entry Latency: Not Reported 00:15:01.870 Exit Latency: Not Reported 00:15:01.870 Relative Read Throughput: 0 00:15:01.870 Relative Read Latency: 0 00:15:01.870 Relative Write Throughput: 0 00:15:01.870 Relative Write Latency: 0 00:15:01.870 Idle Power: Not Reported 00:15:01.870 Active Power: Not Reported 00:15:01.870 Non-Operational Permissive Mode: Not Supported 00:15:01.870 00:15:01.870 Health Information 00:15:01.870 ================== 00:15:01.870 Critical Warnings: 00:15:01.870 Available Spare Space: OK 00:15:01.870 Temperature: OK 00:15:01.870 Device Reliability: OK 00:15:01.870 Read Only: No 00:15:01.870 Volatile Memory Backup: OK 00:15:01.870 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:01.870 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:01.870 Available Spare: 0% 00:15:01.870 Available Spare Threshold: 0% 00:15:01.870 Life Percentage Used: 0% 00:15:01.870 Data Units Read: 0 00:15:01.870 Data Units Written: 0 00:15:01.870 Host Read Commands: 0 00:15:01.870 Host Write Commands: 0 00:15:01.870 Controller Busy Time: 0 minutes 00:15:01.870 Power Cycles: 0 00:15:01.870 Power On Hours: 0 hours 00:15:01.870 Unsafe Shutdowns: 0 00:15:01.870 Unrecoverable Media Errors: 0 00:15:01.870 Lifetime Error Log Entries: 0 00:15:01.870 Warning Temperature Time: 0 minutes 00:15:01.870 Critical Temperature Time: 0 minutes 00:15:01.870 00:15:01.870 Number of Queues 00:15:01.870 ================ 00:15:01.870 Number of I/O Submission Queues: 127 00:15:01.870 Number of I/O Completion Queues: 127 00:15:01.870 00:15:01.870 Active Namespaces 00:15:01.870 ================= 00:15:01.870 Namespace ID:1 00:15:01.870 Error Recovery Timeout: Unlimited 00:15:01.870 Command Set Identifier: NVM (00h) 00:15:01.870 Deallocate: Supported 00:15:01.870 Deallocated/Unwritten Error: Not Supported 00:15:01.870 Deallocated Read Value: Unknown 00:15:01.870 Deallocate in Write Zeroes: Not Supported 00:15:01.870 Deallocated Guard Field: 0xFFFF 00:15:01.870 Flush: Supported 00:15:01.870 Reservation: Supported 00:15:01.870 Namespace Sharing Capabilities: Multiple Controllers 00:15:01.870 Size (in LBAs): 131072 (0GiB) 00:15:01.870 Capacity (in LBAs): 131072 (0GiB) 00:15:01.870 Utilization (in LBAs): 131072 (0GiB) 00:15:01.870 NGUID: 2268B3510C6049119546BE09EF64376C 00:15:01.870 UUID: 2268b351-0c60-4911-9546-be09ef64376c 00:15:01.870 Thin Provisioning: Not Supported 00:15:01.870 Per-NS Atomic Units: Yes 00:15:01.870 Atomic Boundary Size (Normal): 0 00:15:01.870 Atomic Boundary Size (PFail): 0 00:15:01.870 Atomic Boundary Offset: 0 00:15:01.870 Maximum Single Source Range Length: 65535 00:15:01.870 Maximum Copy Length: 65535 00:15:01.870 Maximum Source Range Count: 1 00:15:01.870 NGUID/EUI64 Never Reused: No 00:15:01.870 Namespace Write Protected: No 00:15:01.870 Number of LBA Formats: 1 00:15:01.870 Current LBA Format: LBA Format #00 00:15:01.870 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:01.870 00:15:01.870
[2024-11-06 15:27:19.624852] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:01.870 [2024-11-06 15:27:19.632751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:01.870 [2024-11-06 15:27:19.632773] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:01.870 [2024-11-06 15:27:19.632780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.870 [2024-11-06 15:27:19.632785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.870 [2024-11-06 15:27:19.632789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.870 [2024-11-06 15:27:19.632793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.870 [2024-11-06 15:27:19.632827] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:01.870 [2024-11-06 15:27:19.632835] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:01.870 [2024-11-06 15:27:19.633835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.870 [2024-11-06 15:27:19.633872] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:01.870 [2024-11-06 15:27:19.633877] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:01.870 [2024-11-06 15:27:19.634835] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:01.870 [2024-11-06 15:27:19.634844] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:01.870 [2024-11-06 15:27:19.634885] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:01.870 [2024-11-06 15:27:19.635853] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:01.870
15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:01.870 [2024-11-06 15:27:19.827138] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:07.159 Initializing NVMe Controllers 00:15:07.159
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:07.159 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:07.159 Initialization complete. Launching workers. 00:15:07.159 ======================================================== 00:15:07.159 Latency(us) 00:15:07.159 Device Information : IOPS MiB/s Average min max 00:15:07.159 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40122.00 156.73 3192.65 843.24 6803.43 00:15:07.159 ======================================================== 00:15:07.159 Total : 40122.00 156.73 3192.65 843.24 6803.43 00:15:07.159 00:15:07.159 [2024-11-06 15:27:24.931948] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:07.159 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:07.159 [2024-11-06 15:27:25.123543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:12.446 Initializing NVMe Controllers 00:15:12.446 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:12.446 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:12.446 Initialization complete. Launching workers. 00:15:12.446 ======================================================== 00:15:12.446 Latency(us) 00:15:12.446 Device Information : IOPS MiB/s Average min max 00:15:12.446 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40005.18 156.27 3200.07 857.75 9744.31 00:15:12.446 ======================================================== 00:15:12.446 Total : 40005.18 156.27 3200.07 857.75 9744.31 00:15:12.446 00:15:12.446 [2024-11-06 15:27:30.140465] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.446 15:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:12.446 [2024-11-06 15:27:30.353673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.741 [2024-11-06 15:27:35.472830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.741 Initializing NVMe Controllers 00:15:17.741 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.741 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.741 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:17.741 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:17.741 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:17.741 Initialization complete. Launching workers. 
00:15:17.741 Starting thread on core 2 00:15:17.741 Starting thread on core 3 00:15:17.741 Starting thread on core 1 00:15:17.741 15:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:17.741 [2024-11-06 15:27:35.722108] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.041 [2024-11-06 15:27:38.797878] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.041 Initializing NVMe Controllers 00:15:21.041 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.041 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.041 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:21.041 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:21.041 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:21.041 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:21.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:21.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:21.041 Initialization complete. Launching workers. 00:15:21.041 Starting thread on core 1 with urgent priority queue 00:15:21.041 Starting thread on core 2 with urgent priority queue 00:15:21.041 Starting thread on core 3 with urgent priority queue 00:15:21.041 Starting thread on core 0 with urgent priority queue 00:15:21.041 SPDK bdev Controller (SPDK2 ) core 0: 13678.33 IO/s 7.31 secs/100000 ios 00:15:21.041 SPDK bdev Controller (SPDK2 ) core 1: 11735.00 IO/s 8.52 secs/100000 ios 00:15:21.041 SPDK bdev Controller (SPDK2 ) core 2: 12028.67 IO/s 8.31 secs/100000 ios 00:15:21.041 SPDK bdev Controller (SPDK2 ) core 3: 12422.33 IO/s 8.05 secs/100000 ios 00:15:21.041 ======================================================== 00:15:21.041 00:15:21.041 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:21.302 [2024-11-06 15:27:39.034132] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.302 Initializing NVMe Controllers 00:15:21.302 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.302 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.302 Namespace ID: 1 size: 0GB 00:15:21.302 Initialization complete. 00:15:21.302 INFO: using host memory buffer for IO 00:15:21.302 Hello world! 
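(editor's note) The spdk_nvme_perf, reconnect, arbitration, and hello_world runs above all reach the target through the same vfio-user transport ID string. For reference, the read-throughput invocation from this log condenses to the stand-alone sketch below; paths, NQN, and flags are copied verbatim from the trace, and it assumes the nvmf target is still listening on that socket (queue depth, I/O size, and runtime can be varied):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -q 128: queue depth; -o 4096: 4 KiB I/Os; -w read: sequential reads; -t 5: run for 5 s; -c 0x2: core mask
    sudo ./build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2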
00:15:21.302 [2024-11-06 15:27:39.044204] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.302 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:21.563 [2024-11-06 15:27:39.284133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.511 Initializing NVMe Controllers 00:15:22.511 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.511 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.511 Initialization complete. Launching workers. 00:15:22.511 submit (in ns) avg, min, max = 6837.7, 2820.0, 3999764.2 00:15:22.511 complete (in ns) avg, min, max = 16208.5, 1632.5, 4000729.2 00:15:22.511 00:15:22.511 Submit histogram 00:15:22.511 ================ 00:15:22.511 Range in us Cumulative Count 00:15:22.511 2.813 - 2.827: 0.3483% ( 72) 00:15:22.511 2.827 - 2.840: 1.7704% ( 294) 00:15:22.511 2.840 - 2.853: 4.3825% ( 540) 00:15:22.511 2.853 - 2.867: 9.0166% ( 958) 00:15:22.511 2.867 - 2.880: 13.3798% ( 902) 00:15:22.511 2.880 - 2.893: 18.6282% ( 1085) 00:15:22.511 2.893 - 2.907: 23.5525% ( 1018) 00:15:22.511 2.907 - 2.920: 28.7138% ( 1067) 00:15:22.511 2.920 - 2.933: 34.6684% ( 1231) 00:15:22.511 2.933 - 2.947: 40.0184% ( 1106) 00:15:22.511 2.947 - 2.960: 45.6634% ( 1167) 00:15:22.511 2.960 - 2.973: 51.6132% ( 1230) 00:15:22.511 2.973 - 2.987: 59.6914% ( 1670) 00:15:22.511 2.987 - 3.000: 68.6403% ( 1850) 00:15:22.511 3.000 - 3.013: 77.0570% ( 1740) 00:15:22.511 3.013 - 3.027: 84.2306% ( 1483) 00:15:22.511 3.027 - 3.040: 89.3726% ( 1063) 00:15:22.511 3.040 - 3.053: 93.7987% ( 915) 00:15:22.511 3.053 - 3.067: 96.5317% ( 565) 00:15:22.511 3.067 - 3.080: 98.0458% ( 313) 00:15:22.511 3.080 - 3.093: 98.8197% ( 160) 00:15:22.511 3.093 - 3.107: 99.1922% ( 77) 00:15:22.511 3.107 - 3.120: 99.3325% ( 29) 00:15:22.511 3.120 - 3.133: 99.3760% ( 9) 00:15:22.511 3.133 - 3.147: 99.4050% ( 6) 00:15:22.511 3.147 - 3.160: 99.4389% ( 7) 00:15:22.511 3.160 - 3.173: 99.4534% ( 3) 00:15:22.511 3.173 - 3.187: 99.4631% ( 2) 00:15:22.511 3.213 - 3.227: 99.4776% ( 3) 00:15:22.511 3.240 - 3.253: 99.4921% ( 3) 00:15:22.511 3.253 - 3.267: 99.4969% ( 1) 00:15:22.511 3.280 - 3.293: 99.5018% ( 1) 00:15:22.512 3.293 - 3.307: 99.5066% ( 1) 00:15:22.512 3.320 - 3.333: 99.5163% ( 2) 00:15:22.512 3.347 - 3.360: 99.5211% ( 1) 00:15:22.512 3.360 - 3.373: 99.5260% ( 1) 00:15:22.512 3.413 - 3.440: 99.5308% ( 1) 00:15:22.512 3.467 - 3.493: 99.5356% ( 1) 00:15:22.512 3.493 - 3.520: 99.5405% ( 1) 00:15:22.512 3.520 - 3.547: 99.5453% ( 1) 00:15:22.512 3.600 - 3.627: 99.5501% ( 1) 00:15:22.512 3.680 - 3.707: 99.5550% ( 1) 00:15:22.512 3.787 - 3.813: 99.5598% ( 1) 00:15:22.512 4.133 - 4.160: 99.5646% ( 1) 00:15:22.512 4.267 - 4.293: 99.5695% ( 1) 00:15:22.512 4.480 - 4.507: 99.5743% ( 1) 00:15:22.512 4.747 - 4.773: 99.5792% ( 1) 00:15:22.512 4.880 - 4.907: 99.5840% ( 1) 00:15:22.512 4.987 - 5.013: 99.5888% ( 1) 00:15:22.512 5.013 - 5.040: 99.5985% ( 2) 00:15:22.512 5.120 - 5.147: 99.6082% ( 2) 00:15:22.512 5.147 - 5.173: 99.6179% ( 2) 00:15:22.512 5.173 - 5.200: 99.6227% ( 1) 00:15:22.512 5.227 - 5.253: 99.6275% ( 1) 00:15:22.512 5.467 - 5.493: 99.6324% ( 1) 00:15:22.512 5.653 - 5.680: 99.6372% ( 1) 00:15:22.512 5.707 - 5.733: 99.6420% ( 1) 00:15:22.512 5.733 - 5.760: 
99.6469% ( 1) 00:15:22.512 5.920 - 5.947: 99.6566% ( 2) 00:15:22.512 5.973 - 6.000: 99.6614% ( 1) 00:15:22.512 6.000 - 6.027: 99.6662% ( 1) 00:15:22.512 6.027 - 6.053: 99.6711% ( 1) 00:15:22.512 6.053 - 6.080: 99.6807% ( 2) 00:15:22.512 6.080 - 6.107: 99.6904% ( 2) 00:15:22.512 6.107 - 6.133: 99.6953% ( 1) 00:15:22.512 6.160 - 6.187: 99.7001% ( 1) 00:15:22.512 6.187 - 6.213: 99.7098% ( 2) 00:15:22.512 6.213 - 6.240: 99.7146% ( 1) 00:15:22.512 6.240 - 6.267: 99.7194% ( 1) 00:15:22.512 6.293 - 6.320: 99.7291% ( 2) 00:15:22.512 6.347 - 6.373: 99.7485% ( 4) 00:15:22.512 6.373 - 6.400: 99.7533% ( 1) 00:15:22.512 6.400 - 6.427: 99.7581% ( 1) 00:15:22.512 6.427 - 6.453: 99.7630% ( 1) 00:15:22.512 6.480 - 6.507: 99.7678% ( 1) 00:15:22.512 6.507 - 6.533: 99.7775% ( 2) 00:15:22.512 6.587 - 6.613: 99.7823% ( 1) 00:15:22.512 6.693 - 6.720: 99.7920% ( 2) 00:15:22.512 6.747 - 6.773: 99.8065% ( 3) 00:15:22.512 [2024-11-06 15:27:40.386314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.512 6.773 - 6.800: 99.8113% ( 1) 00:15:22.512 6.800 - 6.827: 99.8210% ( 2) 00:15:22.512 6.880 - 6.933: 99.8259% ( 1) 00:15:22.512 6.987 - 7.040: 99.8307% ( 1) 00:15:22.512 7.093 - 7.147: 99.8355% ( 1) 00:15:22.512 7.147 - 7.200: 99.8452% ( 2) 00:15:22.512 7.253 - 7.307: 99.8597% ( 3) 00:15:22.512 7.307 - 7.360: 99.8646% ( 1) 00:15:22.512 7.467 - 7.520: 99.8742% ( 2) 00:15:22.512 7.520 - 7.573: 99.8791% ( 1) 00:15:22.512 7.573 - 7.627: 99.8839% ( 1) 00:15:22.512 7.840 - 7.893: 99.8887% ( 1) 00:15:22.512 8.107 - 8.160: 99.8936% ( 1) 00:15:22.512 8.693 - 8.747: 99.8984% ( 1) 00:15:22.512 12.587 - 12.640: 99.9033% ( 1) 00:15:22.512 3986.773 - 4014.080: 100.0000% ( 20) 00:15:22.512 00:15:22.512 Complete histogram 00:15:22.512 ================== 00:15:22.512 Range in us Cumulative Count 00:15:22.512 1.627 - 1.633: 0.0048% ( 1) 00:15:22.512 1.633 - 1.640: 0.2612% ( 53) 00:15:22.512 1.640 - 1.647: 0.8223% ( 116) 00:15:22.512 1.647 - 1.653: 0.8852% ( 13) 00:15:22.512 1.653 - 1.660: 0.9626% ( 16) 00:15:22.512 1.660 - 1.667: 1.0448% ( 17) 00:15:22.512 1.667 - 1.673: 1.0787% ( 7) 00:15:22.512 1.673 - 1.680: 6.2400% ( 1067) 00:15:22.512 1.680 - 1.687: 48.0772% ( 8649) 00:15:22.512 1.687 - 1.693: 53.4804% ( 1117) 00:15:22.512 1.693 - 1.700: 63.3677% ( 2044) 00:15:22.512 1.700 - 1.707: 72.3359% ( 1854) 00:15:22.512 1.707 - 1.720: 80.8156% ( 1753) 00:15:22.512 1.720 - 1.733: 82.9488% ( 441) 00:15:22.512 1.733 - 1.747: 86.5815% ( 751) 00:15:22.512 1.747 - 1.760: 91.5107% ( 1019) 00:15:22.512 1.760 - 1.773: 95.6852% ( 863) 00:15:22.512 1.773 - 1.787: 98.0361% ( 486) 00:15:22.512 1.787 - 1.800: 98.9648% ( 192) 00:15:22.512 1.800 - 1.813: 99.3470% ( 79) 00:15:22.512 1.813 - 1.827: 99.4002% ( 11) 00:15:22.512 1.827 - 1.840: 99.4437% ( 9) 00:15:22.512 1.840 - 1.853: 99.4534% ( 2) 00:15:22.512 1.853 - 1.867: 99.4582% ( 1) 00:15:22.512 1.907 - 1.920: 99.4631% ( 1) 00:15:22.512 1.960 - 1.973: 99.4679% ( 1) 00:15:22.512 1.987 - 2.000: 99.4727% ( 1) 00:15:22.512 2.040 - 2.053: 99.4776% ( 1) 00:15:22.512 4.480 - 4.507: 99.4824% ( 1) 00:15:22.512 4.507 - 4.533: 99.4873% ( 1) 00:15:22.512 4.560 - 4.587: 99.4921% ( 1) 00:15:22.512 4.587 - 4.613: 99.4969% ( 1) 00:15:22.512 4.613 - 4.640: 99.5114% ( 3) 00:15:22.512 4.693 - 4.720: 99.5163% ( 1) 00:15:22.512 4.800 - 4.827: 99.5211% ( 1) 00:15:22.512 4.880 - 4.907: 99.5260% ( 1) 00:15:22.512 4.987 - 5.013: 99.5356% ( 2) 00:15:22.512 5.013 - 5.040: 99.5453% ( 2) 00:15:22.512 5.147 - 5.173: 99.5501% ( 1) 00:15:22.512 5.200 - 5.227: 99.5550% ( 1) 
00:15:22.512 5.280 - 5.307: 99.5598% ( 1) 00:15:22.512 5.333 - 5.360: 99.5646% ( 1) 00:15:22.512 5.440 - 5.467: 99.5695% ( 1) 00:15:22.512 5.467 - 5.493: 99.5743% ( 1) 00:15:22.512 5.547 - 5.573: 99.5840% ( 2) 00:15:22.512 5.600 - 5.627: 99.5888% ( 1) 00:15:22.512 5.707 - 5.733: 99.5937% ( 1) 00:15:22.512 5.733 - 5.760: 99.5985% ( 1) 00:15:22.512 5.760 - 5.787: 99.6033% ( 1) 00:15:22.512 5.947 - 5.973: 99.6082% ( 1) 00:15:22.512 6.187 - 6.213: 99.6130% ( 1) 00:15:22.512 6.453 - 6.480: 99.6179% ( 1) 00:15:22.512 6.613 - 6.640: 99.6227% ( 1) 00:15:22.512 6.933 - 6.987: 99.6275% ( 1) 00:15:22.512 12.587 - 12.640: 99.6324% ( 1) 00:15:22.512 123.733 - 124.587: 99.6372% ( 1) 00:15:22.512 3986.773 - 4014.080: 100.0000% ( 75) 00:15:22.512 00:15:22.512 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:22.512 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:22.512 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:22.512 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:22.512 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:22.773 [ 00:15:22.773 { 00:15:22.773 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:22.773 "subtype": "Discovery", 00:15:22.773 "listen_addresses": [], 00:15:22.773 "allow_any_host": true, 00:15:22.773 "hosts": [] 00:15:22.773 }, 00:15:22.773 { 00:15:22.773 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:22.773 "subtype": "NVMe", 00:15:22.773 "listen_addresses": [ 00:15:22.773 { 00:15:22.773 "trtype": "VFIOUSER", 00:15:22.773 "adrfam": "IPv4", 00:15:22.773 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:22.773 "trsvcid": "0" 00:15:22.773 } 00:15:22.773 ], 00:15:22.773 "allow_any_host": true, 00:15:22.773 "hosts": [], 00:15:22.773 "serial_number": "SPDK1", 00:15:22.773 "model_number": "SPDK bdev Controller", 00:15:22.773 "max_namespaces": 32, 00:15:22.773 "min_cntlid": 1, 00:15:22.773 "max_cntlid": 65519, 00:15:22.773 "namespaces": [ 00:15:22.773 { 00:15:22.773 "nsid": 1, 00:15:22.773 "bdev_name": "Malloc1", 00:15:22.773 "name": "Malloc1", 00:15:22.773 "nguid": "58F2B1902E6246DFB19FF8541E42427D", 00:15:22.773 "uuid": "58f2b190-2e62-46df-b19f-f8541e42427d" 00:15:22.773 }, 00:15:22.773 { 00:15:22.773 "nsid": 2, 00:15:22.773 "bdev_name": "Malloc3", 00:15:22.773 "name": "Malloc3", 00:15:22.773 "nguid": "A422C45099A442F8BD81377534797E0A", 00:15:22.773 "uuid": "a422c450-99a4-42f8-bd81-377534797e0a" 00:15:22.773 } 00:15:22.773 ] 00:15:22.773 }, 00:15:22.773 { 00:15:22.773 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:22.773 "subtype": "NVMe", 00:15:22.773 "listen_addresses": [ 00:15:22.773 { 00:15:22.773 "trtype": "VFIOUSER", 00:15:22.773 "adrfam": "IPv4", 00:15:22.773 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:22.773 "trsvcid": "0" 00:15:22.773 } 00:15:22.773 ], 00:15:22.773 "allow_any_host": true, 00:15:22.773 "hosts": [], 00:15:22.773 "serial_number": "SPDK2", 00:15:22.773 "model_number": "SPDK bdev Controller", 00:15:22.773 "max_namespaces": 32, 00:15:22.773 "min_cntlid": 1, 00:15:22.773 "max_cntlid": 65519, 00:15:22.773 "namespaces": [ 00:15:22.773 { 00:15:22.773 
"nsid": 1, 00:15:22.773 "bdev_name": "Malloc2", 00:15:22.773 "name": "Malloc2", 00:15:22.773 "nguid": "2268B3510C6049119546BE09EF64376C", 00:15:22.773 "uuid": "2268b351-0c60-4911-9546-be09ef64376c" 00:15:22.773 } 00:15:22.773 ] 00:15:22.773 } 00:15:22.773 ] 00:15:22.773 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:22.773 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:22.773 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3735183 00:15:22.773 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:22.773 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:22.773 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:22.773 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:22.773 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:22.773 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:22.773 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:23.034 [2024-11-06 15:27:40.763126] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:23.034 Malloc4 00:15:23.034 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:23.034 [2024-11-06 15:27:40.965639] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:23.034 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:23.034 Asynchronous Event Request test 00:15:23.034 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:23.034 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:23.034 Registering asynchronous event callbacks... 00:15:23.034 Starting namespace attribute notice tests for all controllers... 00:15:23.034 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:23.034 aer_cb - Changed Namespace 00:15:23.034 Cleaning up... 
00:15:23.295 [ 00:15:23.295 { 00:15:23.295 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:23.295 "subtype": "Discovery", 00:15:23.295 "listen_addresses": [], 00:15:23.295 "allow_any_host": true, 00:15:23.295 "hosts": [] 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:23.295 "subtype": "NVMe", 00:15:23.295 "listen_addresses": [ 00:15:23.295 { 00:15:23.295 "trtype": "VFIOUSER", 00:15:23.295 "adrfam": "IPv4", 00:15:23.295 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:23.295 "trsvcid": "0" 00:15:23.295 } 00:15:23.295 ], 00:15:23.295 "allow_any_host": true, 00:15:23.295 "hosts": [], 00:15:23.295 "serial_number": "SPDK1", 00:15:23.295 "model_number": "SPDK bdev Controller", 00:15:23.295 "max_namespaces": 32, 00:15:23.295 "min_cntlid": 1, 00:15:23.295 "max_cntlid": 65519, 00:15:23.295 "namespaces": [ 00:15:23.295 { 00:15:23.295 "nsid": 1, 00:15:23.295 "bdev_name": "Malloc1", 00:15:23.295 "name": "Malloc1", 00:15:23.295 "nguid": "58F2B1902E6246DFB19FF8541E42427D", 00:15:23.295 "uuid": "58f2b190-2e62-46df-b19f-f8541e42427d" 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "nsid": 2, 00:15:23.295 "bdev_name": "Malloc3", 00:15:23.295 "name": "Malloc3", 00:15:23.295 "nguid": "A422C45099A442F8BD81377534797E0A", 00:15:23.295 "uuid": "a422c450-99a4-42f8-bd81-377534797e0a" 00:15:23.295 } 00:15:23.295 ] 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:23.295 "subtype": "NVMe", 00:15:23.295 "listen_addresses": [ 00:15:23.295 { 00:15:23.295 "trtype": "VFIOUSER", 00:15:23.295 "adrfam": "IPv4", 00:15:23.295 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:23.295 "trsvcid": "0" 00:15:23.295 } 00:15:23.295 ], 00:15:23.295 "allow_any_host": true, 00:15:23.295 "hosts": [], 00:15:23.295 "serial_number": "SPDK2", 00:15:23.295 "model_number": "SPDK bdev Controller", 00:15:23.295 "max_namespaces": 32, 00:15:23.295 "min_cntlid": 1, 00:15:23.295 "max_cntlid": 65519, 00:15:23.295 "namespaces": [ 00:15:23.295 { 00:15:23.295 "nsid": 1, 00:15:23.295 "bdev_name": "Malloc2", 00:15:23.295 "name": "Malloc2", 00:15:23.295 "nguid": "2268B3510C6049119546BE09EF64376C", 00:15:23.295 "uuid": "2268b351-0c60-4911-9546-be09ef64376c" 00:15:23.295 }, 00:15:23.295 { 00:15:23.295 "nsid": 2, 00:15:23.295 "bdev_name": "Malloc4", 00:15:23.295 "name": "Malloc4", 00:15:23.295 "nguid": "0AB1892CF190470B973A662FA8CCF29F", 00:15:23.295 "uuid": "0ab1892c-f190-470b-973a-662fa8ccf29f" 00:15:23.295 } 00:15:23.295 ] 00:15:23.295 } 00:15:23.295 ] 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3735183 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3725521 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 3725521 ']' 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3725521 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3725521 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3725521' 00:15:23.295 killing process with pid 3725521 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3725521 00:15:23.295 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3725521 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3735386 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3735386' 00:15:23.556 Process pid: 3735386 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3735386 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3735386 ']' 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:23.556 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.557 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:23.557 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:23.557 [2024-11-06 15:27:41.448658] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:23.557 [2024-11-06 15:27:41.449610] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:15:23.557 [2024-11-06 15:27:41.449657] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.557 [2024-11-06 15:27:41.537235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.817 [2024-11-06 15:27:41.573833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.817 [2024-11-06 15:27:41.573870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.817 [2024-11-06 15:27:41.573876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.817 [2024-11-06 15:27:41.573881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.817 [2024-11-06 15:27:41.573885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.817 [2024-11-06 15:27:41.575451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.817 [2024-11-06 15:27:41.575607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.817 [2024-11-06 15:27:41.575778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.817 [2024-11-06 15:27:41.575780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.817 [2024-11-06 15:27:41.630126] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:23.817 [2024-11-06 15:27:41.631058] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:23.817 [2024-11-06 15:27:41.632033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:23.817 [2024-11-06 15:27:41.632623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:23.817 [2024-11-06 15:27:41.632652] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
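(editor's note) The trace that follows restarts nvmf_tgt in interrupt mode and then repeats the per-device vfio-user setup one RPC at a time. For readability, that sequence condenses to the sketch below; every command and argument is copied from the trace, while `rpc` is shorthand introduced here for the rpc.py path, and the second device is created identically with cnode2/Malloc2/SPDK2/vfio-user2:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER -M -I    # -M -I are specific to this interrupt-mode run
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc bdev_malloc_create 64 512 -b Malloc1       # 64 MiB malloc bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0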
00:15:24.389 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:24.389 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:24.389 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:25.331 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:25.591 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:25.592 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:25.592 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:25.592 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:25.592 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:25.853 Malloc1 00:15:25.853 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:26.114 15:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:26.114 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:26.375 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:26.375 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:26.375 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:26.636 Malloc2 00:15:26.636 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:26.636 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:26.898 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:27.159 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:27.159 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3735386 00:15:27.159 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 3735386 ']' 00:15:27.159 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3735386 00:15:27.159 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:27.159 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:27.159 15:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3735386 00:15:27.159 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:27.159 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:27.159 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3735386' 00:15:27.159 killing process with pid 3735386 00:15:27.159 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3735386 00:15:27.159 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3735386 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:27.420 00:15:27.420 real 0m51.945s 00:15:27.420 user 3m19.245s 00:15:27.420 sys 0m2.684s 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:27.420 ************************************ 00:15:27.420 END TEST nvmf_vfio_user 00:15:27.420 ************************************ 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.420 ************************************ 00:15:27.420 START TEST nvmf_vfio_user_nvme_compliance 00:15:27.420 ************************************ 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:27.420 * Looking for test storage... 
00:15:27.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:27.420 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:27.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.683 --rc genhtml_branch_coverage=1 00:15:27.683 --rc genhtml_function_coverage=1 00:15:27.683 --rc genhtml_legend=1 00:15:27.683 --rc geninfo_all_blocks=1 00:15:27.683 --rc geninfo_unexecuted_blocks=1 00:15:27.683 00:15:27.683 ' 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:27.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.683 --rc genhtml_branch_coverage=1 00:15:27.683 --rc genhtml_function_coverage=1 00:15:27.683 --rc genhtml_legend=1 00:15:27.683 --rc geninfo_all_blocks=1 00:15:27.683 --rc geninfo_unexecuted_blocks=1 00:15:27.683 00:15:27.683 ' 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:27.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.683 --rc genhtml_branch_coverage=1 00:15:27.683 --rc genhtml_function_coverage=1 00:15:27.683 --rc genhtml_legend=1 00:15:27.683 --rc geninfo_all_blocks=1 00:15:27.683 --rc geninfo_unexecuted_blocks=1 00:15:27.683 00:15:27.683 ' 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:27.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.683 --rc genhtml_branch_coverage=1 00:15:27.683 --rc genhtml_function_coverage=1 00:15:27.683 --rc genhtml_legend=1 00:15:27.683 --rc geninfo_all_blocks=1 00:15:27.683 --rc 
geninfo_unexecuted_blocks=1 00:15:27.683 00:15:27.683 ' 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.683 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3736273 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3736273' 00:15:27.684 Process pid: 3736273 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3736273 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 3736273 ']' 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:27.684 15:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:27.684 [2024-11-06 15:27:45.511851] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:15:27.684 [2024-11-06 15:27:45.511903] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.684 [2024-11-06 15:27:45.591718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.684 [2024-11-06 15:27:45.622413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.684 [2024-11-06 15:27:45.622443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.684 [2024-11-06 15:27:45.622450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.684 [2024-11-06 15:27:45.622454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.684 [2024-11-06 15:27:45.622458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.684 [2024-11-06 15:27:45.623592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.684 [2024-11-06 15:27:45.623755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.684 [2024-11-06 15:27:45.623768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.626 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:28.626 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:15:28.626 15:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.570 malloc0 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:29.570 15:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.570 15:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:29.570 00:15:29.570 00:15:29.570 CUnit - A unit testing framework for C - Version 2.1-3 00:15:29.570 http://cunit.sourceforge.net/ 00:15:29.570 00:15:29.570 00:15:29.570 Suite: nvme_compliance 00:15:29.570 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-06 15:27:47.547122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.570 [2024-11-06 15:27:47.548409] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:29.570 [2024-11-06 15:27:47.548421] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:29.570 [2024-11-06 15:27:47.548425] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:29.570 [2024-11-06 15:27:47.551143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.831 passed 00:15:29.831 Test: admin_identify_ctrlr_verify_fused ...[2024-11-06 15:27:47.629640] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.831 [2024-11-06 15:27:47.632667] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.831 passed 00:15:29.831 Test: admin_identify_ns ...[2024-11-06 15:27:47.708093] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.831 [2024-11-06 15:27:47.767759] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:29.831 [2024-11-06 15:27:47.775755] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:29.831 [2024-11-06 15:27:47.796838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:30.092 passed 00:15:30.092 Test: admin_get_features_mandatory_features ...[2024-11-06 15:27:47.871892] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.092 [2024-11-06 15:27:47.874914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.092 passed 00:15:30.092 Test: admin_get_features_optional_features ...[2024-11-06 15:27:47.953402] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.092 [2024-11-06 15:27:47.956429] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.092 passed 00:15:30.092 Test: admin_set_features_number_of_queues ...[2024-11-06 15:27:48.031189] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.354 [2024-11-06 15:27:48.136846] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.354 passed 00:15:30.354 Test: admin_get_log_page_mandatory_logs ...[2024-11-06 15:27:48.210073] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.354 [2024-11-06 15:27:48.213098] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.354 passed 00:15:30.354 Test: admin_get_log_page_with_lpo ...[2024-11-06 15:27:48.288101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.614 [2024-11-06 15:27:48.359758] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:30.614 [2024-11-06 15:27:48.372807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.614 passed 00:15:30.614 Test: fabric_property_get ...[2024-11-06 15:27:48.444025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.614 [2024-11-06 15:27:48.445218] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:30.614 [2024-11-06 15:27:48.447036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.614 passed 00:15:30.614 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-06 15:27:48.525546] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.614 [2024-11-06 15:27:48.526752] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:30.614 [2024-11-06 15:27:48.528567] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.614 passed 00:15:30.875 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-06 15:27:48.603271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.875 [2024-11-06 15:27:48.686750] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:30.875 [2024-11-06 15:27:48.699783] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:30.875 [2024-11-06 15:27:48.704828] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.875 passed 00:15:30.875 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-06 15:27:48.777052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.875 [2024-11-06 15:27:48.778255] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:30.875 [2024-11-06 15:27:48.781078] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.875 passed 00:15:30.875 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-06 15:27:48.856090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.136 [2024-11-06 15:27:48.934752] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:31.136 [2024-11-06 15:27:48.958748] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:31.136 [2024-11-06 15:27:48.963821] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.136 passed 00:15:31.136 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-06 15:27:49.039006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.136 [2024-11-06 15:27:49.040195] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:31.136 [2024-11-06 15:27:49.040213] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:31.136 [2024-11-06 15:27:49.042021] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.136 passed 00:15:31.136 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-06 15:27:49.115096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.396 [2024-11-06 15:27:49.210753] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:31.396 [2024-11-06 15:27:49.218756] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:31.396 [2024-11-06 15:27:49.226751] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:31.396 [2024-11-06 15:27:49.234752] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:31.396 [2024-11-06 15:27:49.263828] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.396 passed 00:15:31.396 Test: admin_create_io_sq_verify_pc ...[2024-11-06 15:27:49.336002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.396 [2024-11-06 15:27:49.354758] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:31.396 [2024-11-06 15:27:49.372147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.657 passed 00:15:31.657 Test: admin_create_io_qp_max_qps ...[2024-11-06 15:27:49.447593] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.598 [2024-11-06 15:27:50.563756] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:33.169 [2024-11-06 15:27:50.954234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.169 passed 00:15:33.169 Test: admin_create_io_sq_shared_cq ...[2024-11-06 15:27:51.030059] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.429 [2024-11-06 15:27:51.162752] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:33.429 [2024-11-06 15:27:51.199803] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.429 passed 00:15:33.429 00:15:33.429 Run Summary: Type Total Ran Passed Failed Inactive 00:15:33.429 suites 1 1 n/a 0 0 00:15:33.429 tests 18 18 18 0 0 00:15:33.429 asserts 
360 360 360 0 n/a 00:15:33.429 00:15:33.429 Elapsed time = 1.501 seconds 00:15:33.429 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3736273 00:15:33.429 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 3736273 ']' 00:15:33.430 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 3736273 00:15:33.430 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:15:33.430 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:33.430 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3736273 00:15:33.430 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:33.430 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:33.430 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3736273' 00:15:33.430 killing process with pid 3736273 00:15:33.430 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 3736273 00:15:33.430 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 3736273 00:15:33.697 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:33.697 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:33.697 00:15:33.697 real 0m6.198s 00:15:33.697 user 0m17.605s 00:15:33.697 sys 0m0.511s 00:15:33.697 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:33.698 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.698 ************************************ 00:15:33.698 END TEST nvmf_vfio_user_nvme_compliance 00:15:33.698 ************************************ 00:15:33.698 15:27:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:33.698 15:27:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:33.698 15:27:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:33.698 15:27:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.698 ************************************ 00:15:33.698 START TEST nvmf_vfio_user_fuzz 00:15:33.698 ************************************ 00:15:33.698 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:33.698 * Looking for test storage... 
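
Before each of these test scripts runs, the harness probes for test storage (the "Looking for"/"Found" pair here) and then picks lcov options by comparing the installed lcov version against 2, one dotted component at a time; the scripts/common.sh cmp_versions xtrace below steps through exactly that walk. A condensed sketch of the comparison being traced, with variable names taken from that xtrace:

    # lt 1.15 2  ->  cmp_versions 1.15 '<' 2
    IFS=.- read -ra ver1 <<< "1.15"   # ver1=(1 15), ver1_l=2
    IFS=.- read -ra ver2 <<< "2"      # ver2=(2),    ver2_l=1
    # components are compared left to right: ver1[0]=1 vs ver2[0]=2,
    # and 1 < 2 makes cmp_versions return 0 (true) on the first component,
    # which is why the --rc lcov_branch_coverage=1 option spelling is exported below
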
00:15:33.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.698 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:33.698 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:33.698 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:34.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.051 --rc genhtml_branch_coverage=1 00:15:34.051 --rc genhtml_function_coverage=1 00:15:34.051 --rc genhtml_legend=1 00:15:34.051 --rc geninfo_all_blocks=1 00:15:34.051 --rc geninfo_unexecuted_blocks=1 00:15:34.051 00:15:34.051 ' 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:34.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.051 --rc genhtml_branch_coverage=1 00:15:34.051 --rc genhtml_function_coverage=1 00:15:34.051 --rc genhtml_legend=1 00:15:34.051 --rc geninfo_all_blocks=1 00:15:34.051 --rc geninfo_unexecuted_blocks=1 00:15:34.051 00:15:34.051 ' 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:34.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.051 --rc genhtml_branch_coverage=1 00:15:34.051 --rc genhtml_function_coverage=1 00:15:34.051 --rc genhtml_legend=1 00:15:34.051 --rc geninfo_all_blocks=1 00:15:34.051 --rc geninfo_unexecuted_blocks=1 00:15:34.051 00:15:34.051 ' 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:34.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.051 --rc genhtml_branch_coverage=1 00:15:34.051 --rc genhtml_function_coverage=1 00:15:34.051 --rc genhtml_legend=1 00:15:34.051 --rc geninfo_all_blocks=1 00:15:34.051 --rc geninfo_unexecuted_blocks=1 00:15:34.051 00:15:34.051 ' 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.051 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:34.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3737528 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3737528' 00:15:34.052 Process pid: 3737528 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3737528 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 3737528 ']' 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
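
For reference, the target launch that waitforlisten is blocking on here, annotated only with what this log itself establishes (the NVMF_APP arguments, the tracepoint notice, and the reactor lines from the compliance run above):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
    #  -i 0       shared-memory instance id, the -i "$NVMF_APP_SHM_ID" wired into NVMF_APP above
    #  -e 0xFFFF  tracepoint group mask ("Tracepoint Group Mask 0xFFFF specified" earlier)
    #  -m 0x1     reactor core mask: a single core for this fuzz target, versus -m 0x7
    #             (reactors on cores 0, 1 and 2) for the compliance target
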
00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:15:34.052 15:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:35.021 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:35.021 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.021 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.021 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.021 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:35.021 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:35.021 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.021 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.021 malloc0 00:15:35.022 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.022 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:35.022 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.022 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.283 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.283 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:35.283 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.283 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.283 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.283 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:35.283 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.283 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.283 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.283 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
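
With that trid assigned, the fuzz target is fully wired up: the rpc_cmd xtrace above repeats the same vfio-user bring-up the compliance test used. A condensed sketch of the sequence, assuming rpc_cmd resolves to SPDK's scripts/rpc.py against /var/tmp/spdk.sock as it does elsewhere in this suite (the compliance variant additionally passed -m 32 to cap the namespace count):

    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc.py bdev_malloc_create 64 512 -b malloc0      # MALLOC_BDEV_SIZE MB, MALLOC_BLOCK_SIZE-byte blocks
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz run that follows drives this target for 30 seconds (-t 30, matching the roughly 32 s wall clock reported at the end) with a fixed seed (-S 123456) so the run is reproducible.
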
00:15:35.283 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:07.409 Fuzzing completed. Shutting down the fuzz application 00:16:07.409 00:16:07.409 Dumping successful admin opcodes: 00:16:07.409 8, 9, 10, 24, 00:16:07.409 Dumping successful io opcodes: 00:16:07.409 0, 00:16:07.409 NS: 0x20000081ef00 I/O qp, Total commands completed: 1349142, total successful commands: 5294, random_seed: 3222088640 00:16:07.409 NS: 0x20000081ef00 admin qp, Total commands completed: 303617, total successful commands: 2437, random_seed: 689307136 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3737528 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3737528 ']' 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 3737528 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3737528 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3737528' 00:16:07.409 killing process with pid 3737528 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 3737528 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 3737528 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:07.409 00:16:07.409 real 0m32.197s 00:16:07.409 user 0m37.349s 00:16:07.409 sys 0m23.526s 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.409 
************************************ 00:16:07.409 END TEST nvmf_vfio_user_fuzz 00:16:07.409 ************************************ 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:07.409 ************************************ 00:16:07.409 START TEST nvmf_auth_target 00:16:07.409 ************************************ 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:07.409 * Looking for test storage... 00:16:07.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.409 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:07.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.409 --rc genhtml_branch_coverage=1 00:16:07.409 --rc genhtml_function_coverage=1 00:16:07.409 --rc genhtml_legend=1 00:16:07.409 --rc geninfo_all_blocks=1 00:16:07.409 --rc geninfo_unexecuted_blocks=1 00:16:07.409 00:16:07.409 ' 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:07.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.410 --rc genhtml_branch_coverage=1 00:16:07.410 --rc genhtml_function_coverage=1 00:16:07.410 --rc genhtml_legend=1 00:16:07.410 --rc geninfo_all_blocks=1 00:16:07.410 --rc geninfo_unexecuted_blocks=1 00:16:07.410 00:16:07.410 ' 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:07.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.410 --rc genhtml_branch_coverage=1 00:16:07.410 --rc genhtml_function_coverage=1 00:16:07.410 --rc genhtml_legend=1 00:16:07.410 --rc geninfo_all_blocks=1 00:16:07.410 --rc geninfo_unexecuted_blocks=1 00:16:07.410 00:16:07.410 ' 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:07.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.410 --rc genhtml_branch_coverage=1 00:16:07.410 --rc genhtml_function_coverage=1 00:16:07.410 --rc genhtml_legend=1 00:16:07.410 --rc geninfo_all_blocks=1 00:16:07.410 --rc geninfo_unexecuted_blocks=1 00:16:07.410 00:16:07.410 ' 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.410 15:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:07.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.410 15:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.410 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:07.410 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:07.410 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:07.410 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:14.001 
15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:14.001 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.001 15:28:31 
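The gather_supported_nvmf_pci_devs walk above builds vendor:device allow-lists (e810, x722, mlx) and matches each PCI function against them; both hits in this run are Intel E810 ports, 0x8086:0x159b bound to the ice driver, whose net devices are collected next. A sketch of the same match reading sysfs directly, since the traced script goes through a prebuilt pci_bus_cache:

# List E810 functions and the kernel net devices under each one.
intel=0x8086
for pci in /sys/bus/pci/devices/*; do
  vendor=$(<"$pci/vendor") device=$(<"$pci/device")
  if [[ $vendor == "$intel" && $device == 0x159b ]]; then
    echo "Found ${pci##*/} ($vendor - $device)"
    ls "$pci/net" 2>/dev/null    # e.g. cvl_0_0 and cvl_0_1 in this run
  fi
done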
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:14.001 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:14.001 Found net devices under 0000:31:00.0: cvl_0_0 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:14.001 Found net devices under 0000:31:00.1: cvl_0_1 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:14.001 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:14.002 15:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:16:14.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:14.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms
00:16:14.002
00:16:14.002 --- 10.0.0.2 ping statistics ---
00:16:14.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:14.002 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:14.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:14.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms
00:16:14.002
00:16:14.002 --- 10.0.0.1 ping statistics ---
00:16:14.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:14.002 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3747379
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3747379
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3747379 ']'
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
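With both directions pinging, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket answers. A condensed equivalent from the SPDK checkout root; the polling loop is an assumption, SPDK's real waitforlisten helper is more elaborate:

ip netns exec cvl_0_0_ns_spdk \
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
# Poll the default RPC socket until the target responds.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
  sleep 0.5
done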
00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:14.002 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3747717 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=df61d124bbc968e2a6fe02dabb440bc72a5af2e3d7f94988 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vpR 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key df61d124bbc968e2a6fe02dabb440bc72a5af2e3d7f94988 0 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 df61d124bbc968e2a6fe02dabb440bc72a5af2e3d7f94988 0 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=df61d124bbc968e2a6fe02dabb440bc72a5af2e3d7f94988 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:14.575 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vpR 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vpR 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.vpR 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fe7a44585f6e1025417c65f2d6f351ecd37712521e957277327f5dc784fce228 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.g9Q 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fe7a44585f6e1025417c65f2d6f351ecd37712521e957277327f5dc784fce228 3 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fe7a44585f6e1025417c65f2d6f351ecd37712521e957277327f5dc784fce228 3 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fe7a44585f6e1025417c65f2d6f351ecd37712521e957277327f5dc784fce228 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.g9Q 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.g9Q 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.g9Q 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9ef78e88a892bd1004471856f12fa626 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.c3I 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9ef78e88a892bd1004471856f12fa626 1 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9ef78e88a892bd1004471856f12fa626 1 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9ef78e88a892bd1004471856f12fa626 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.c3I 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.c3I 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.c3I 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b8a42f8c303d919190565585a37da43c08530be7b3822a29 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.mkg 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b8a42f8c303d919190565585a37da43c08530be7b3822a29 2 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b8a42f8c303d919190565585a37da43c08530be7b3822a29 2 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:14.837 15:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b8a42f8c303d919190565585a37da43c08530be7b3822a29 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.mkg 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.mkg 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.mkg 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ceba1774e623a9cc98876ad75d49e3d4104736fa2282acd5 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LjT 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ceba1774e623a9cc98876ad75d49e3d4104736fa2282acd5 2 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ceba1774e623a9cc98876ad75d49e3d4104736fa2282acd5 2 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ceba1774e623a9cc98876ad75d49e3d4104736fa2282acd5 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:14.837 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LjT 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LjT 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.LjT 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fd3ec96c67a93e9a71bbabd75c5f206b 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kOA 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fd3ec96c67a93e9a71bbabd75c5f206b 1 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fd3ec96c67a93e9a71bbabd75c5f206b 1 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fd3ec96c67a93e9a71bbabd75c5f206b 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kOA 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kOA 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.kOA 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4a0d1cbffdc0f07422057c29dbc4a34f0c1a91476f901c47c23e3b60c3923896 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SHS 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 4a0d1cbffdc0f07422057c29dbc4a34f0c1a91476f901c47c23e3b60c3923896 3 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4a0d1cbffdc0f07422057c29dbc4a34f0c1a91476f901c47c23e3b60c3923896 3 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4a0d1cbffdc0f07422057c29dbc4a34f0c1a91476f901c47c23e3b60c3923896 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SHS 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SHS 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.SHS 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3747379 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3747379 ']' 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:15.100 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.362 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:15.362 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:15.362 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3747717 /var/tmp/host.sock 00:16:15.362 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3747717 ']' 00:16:15.362 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:16:15.362 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:15.362 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:15.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
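The seven gen_dhchap_key calls above all follow one recipe: draw len/2 random bytes, hex-encode them with xxd, and wrap the hex string as a DHHC-1 secret in a mode-0600 temp file. The trace delegates the wrapping to an inline "python -"; the sketch below reconstructs that step as base64 over the key plus a little-endian CRC32 trailer, with digest ids 0 through 3 for null/sha256/sha384/sha512. That encoding is an assumption, though it is consistent with the DHHC-1 strings printed later in this log:

# Generate one 48-hex-char null-digest key, as for keys[0] above.
digest=0
key=$(xxd -p -c0 -l 24 /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" <<'PY' > "$file"
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
blob = base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little"))
print(f"DHHC-1:{digest:02}:{blob.decode()}:")
PY
chmod 0600 "$file"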
00:16:15.362 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:15.362 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.623 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:15.623 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:15.623 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:15.623 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.623 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.623 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.623 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:15.623 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vpR 00:16:15.623 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.623 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.623 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.623 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vpR 00:16:15.624 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vpR 00:16:15.886 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.g9Q ]] 00:16:15.886 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.g9Q 00:16:15.886 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.886 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.886 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.886 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.g9Q 00:16:15.886 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.g9Q 00:16:15.886 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:15.886 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.c3I 00:16:15.886 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.886 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.147 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.147 15:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.c3I 00:16:16.147 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.c3I 00:16:16.147 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.mkg ]] 00:16:16.147 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mkg 00:16:16.147 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.147 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.147 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.147 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mkg 00:16:16.147 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mkg 00:16:16.408 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:16.408 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.LjT 00:16:16.408 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.408 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.408 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.408 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.LjT 00:16:16.408 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.LjT 00:16:16.670 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.kOA ]] 00:16:16.670 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kOA 00:16:16.670 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.670 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.670 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.670 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kOA 00:16:16.670 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kOA 00:16:16.931 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:16.931 15:28:34 
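Each secret file is then registered twice, on the target over the default /var/tmp/spdk.sock (rpc_cmd) and on the host-side service over /var/tmp/host.sock (hostrpc), under the names key0..key3 and ckey0..ckey2. Condensed from target/auth.sh@108-113, assuming the keys/ckeys arrays filled in above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
  "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"                        # target
  "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host
  if [[ -n ${ckeys[$i]} ]]; then
    "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  fi
done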
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SHS 00:16:16.931 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.931 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.931 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.931 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.SHS 00:16:16.931 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.SHS 00:16:16.931 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:16.931 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:16.931 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.931 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.931 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:16.931 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.192 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:17.192 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.192 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.192 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:17.192 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:17.192 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.192 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.192 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.192 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.192 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.192 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.192 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.193 
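target/auth.sh@118-121 above enters the main test matrix: for every digest, dhgroup, and key id it pins the host side to exactly one digest/dhgroup pair and then runs connect_authenticate; this stretch of the log is the digest=sha256, dhgroup=null column. A condensed sketch of the driver loop, reusing $rpc from the previous sketch:

for digest in "${digests[@]}"; do        # sha256 sha384 sha512
  for dhgroup in "${dhgroups[@]}"; do    # null ffdhe2048 ... ffdhe8192
    for keyid in "${!keys[@]}"; do
      "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done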
15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.454 00:16:17.454 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.454 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.454 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.716 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.716 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.716 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.716 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.716 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.716 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.716 { 00:16:17.716 "cntlid": 1, 00:16:17.716 "qid": 0, 00:16:17.716 "state": "enabled", 00:16:17.716 "thread": "nvmf_tgt_poll_group_000", 00:16:17.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:17.716 "listen_address": { 00:16:17.716 "trtype": "TCP", 00:16:17.716 "adrfam": "IPv4", 00:16:17.716 "traddr": "10.0.0.2", 00:16:17.716 "trsvcid": "4420" 00:16:17.716 }, 00:16:17.716 "peer_address": { 00:16:17.716 "trtype": "TCP", 00:16:17.716 "adrfam": "IPv4", 00:16:17.716 "traddr": "10.0.0.1", 00:16:17.716 "trsvcid": "55186" 00:16:17.716 }, 00:16:17.716 "auth": { 00:16:17.716 "state": "completed", 00:16:17.716 "digest": "sha256", 00:16:17.716 "dhgroup": "null" 00:16:17.716 } 00:16:17.716 } 00:16:17.716 ]' 00:16:17.716 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.716 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.716 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.716 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:17.716 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.978 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.978 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.978 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.978 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:17.978 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.919 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.920 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.920 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.920 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.920 15:28:36 
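Besides the SPDK-host path, connect_authenticate also exercises the kernel initiator: nvme_connect above hands the raw DHHC-1 strings to nvme-cli, and the controller is disconnected again right after. A sketch with placeholder secrets, since the real values are the /tmp/spdk.key-* contents registered earlier; the NQNs and hostid are the ones used throughout this log:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "$hostnqn" --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 \
  --dhchap-secret 'DHHC-1:00:<key0 secret>:' \
  --dhchap-ctrl-secret 'DHHC-1:03:<ckey0 secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0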
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.920 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.181 00:16:19.181 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.181 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.181 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.442 { 00:16:19.442 "cntlid": 3, 00:16:19.442 "qid": 0, 00:16:19.442 "state": "enabled", 00:16:19.442 "thread": "nvmf_tgt_poll_group_000", 00:16:19.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:19.442 "listen_address": { 00:16:19.442 "trtype": "TCP", 00:16:19.442 "adrfam": "IPv4", 00:16:19.442 "traddr": "10.0.0.2", 00:16:19.442 "trsvcid": "4420" 00:16:19.442 }, 00:16:19.442 "peer_address": { 00:16:19.442 "trtype": "TCP", 00:16:19.442 "adrfam": "IPv4", 00:16:19.442 "traddr": "10.0.0.1", 00:16:19.442 "trsvcid": "55214" 00:16:19.442 }, 00:16:19.442 "auth": { 00:16:19.442 "state": "completed", 00:16:19.442 "digest": "sha256", 00:16:19.442 "dhgroup": "null" 00:16:19.442 } 00:16:19.442 } 00:16:19.442 ]' 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.442 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.703 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:16:19.703 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:16:20.275 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.275 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:20.275 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.275 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.275 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.275 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.275 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:20.275 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:20.536 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:20.536 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.536 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.536 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:20.536 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:20.536 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.536 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.536 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.536 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.536 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.536 15:28:38 
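The body repeating above (target/auth.sh@65-83) is the per-key round trip: authorize the host NQN on the subsystem with the key pair, attach a controller through the host-side service, verify the qpair, then detach and de-authorize. A condensed reconstruction; the real function also runs the auth checks and the nvme-cli connect shown elsewhere in this log:

connect_authenticate() {
  local digest=$1 dhgroup=$2 key=key$3
  local ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # empty for key3
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "$key" "${ckey[@]}"
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "$key" "${ckey[@]}"
  # ... qpair/auth checks and nvme connect/disconnect happen here ...
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}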
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.536 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.536 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.797 00:16:20.797 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.797 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.797 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.058 { 00:16:21.058 "cntlid": 5, 00:16:21.058 "qid": 0, 00:16:21.058 "state": "enabled", 00:16:21.058 "thread": "nvmf_tgt_poll_group_000", 00:16:21.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:21.058 "listen_address": { 00:16:21.058 "trtype": "TCP", 00:16:21.058 "adrfam": "IPv4", 00:16:21.058 "traddr": "10.0.0.2", 00:16:21.058 "trsvcid": "4420" 00:16:21.058 }, 00:16:21.058 "peer_address": { 00:16:21.058 "trtype": "TCP", 00:16:21.058 "adrfam": "IPv4", 00:16:21.058 "traddr": "10.0.0.1", 00:16:21.058 "trsvcid": "55234" 00:16:21.058 }, 00:16:21.058 "auth": { 00:16:21.058 "state": "completed", 00:16:21.058 "digest": "sha256", 00:16:21.058 "dhgroup": "null" 00:16:21.058 } 00:16:21.058 } 00:16:21.058 ]' 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.058 15:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.058 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.319 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:16:21.319 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:16:21.890 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.891 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:21.891 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.891 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.891 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.891 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.891 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.891 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.151 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.413 00:16:22.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.413 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.674 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.674 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.674 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.674 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.674 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.675 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.675 { 00:16:22.675 "cntlid": 7, 00:16:22.675 "qid": 0, 00:16:22.675 "state": "enabled", 00:16:22.675 "thread": "nvmf_tgt_poll_group_000", 00:16:22.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:22.675 "listen_address": { 00:16:22.675 "trtype": "TCP", 00:16:22.675 "adrfam": "IPv4", 00:16:22.675 "traddr": "10.0.0.2", 00:16:22.675 "trsvcid": "4420" 00:16:22.675 }, 00:16:22.675 "peer_address": { 00:16:22.675 "trtype": "TCP", 00:16:22.675 "adrfam": "IPv4", 00:16:22.675 "traddr": "10.0.0.1", 00:16:22.675 "trsvcid": "55254" 00:16:22.675 }, 00:16:22.675 "auth": { 00:16:22.675 "state": "completed", 00:16:22.675 "digest": "sha256", 00:16:22.675 "dhgroup": "null" 00:16:22.675 } 00:16:22.675 } 00:16:22.675 ]' 00:16:22.675 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.675 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.675 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.675 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.675 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.675 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.675 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.675 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.936 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:16:22.936 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:16:23.508 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.508 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:23.508 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.508 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.508 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.508 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.508 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.508 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.508 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.768 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.028 00:16:24.028 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.028 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.029 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.289 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.289 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.289 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.289 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.289 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.289 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.289 { 00:16:24.289 "cntlid": 9, 00:16:24.289 "qid": 0, 00:16:24.289 "state": "enabled", 00:16:24.289 "thread": "nvmf_tgt_poll_group_000", 00:16:24.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:24.289 "listen_address": { 00:16:24.289 "trtype": "TCP", 00:16:24.289 "adrfam": "IPv4", 00:16:24.289 "traddr": "10.0.0.2", 00:16:24.289 "trsvcid": "4420" 00:16:24.289 }, 00:16:24.289 "peer_address": { 00:16:24.289 "trtype": "TCP", 00:16:24.289 "adrfam": "IPv4", 00:16:24.289 "traddr": "10.0.0.1", 00:16:24.289 "trsvcid": "55284" 00:16:24.289 }, 00:16:24.289 "auth": { 00:16:24.289 "state": "completed", 00:16:24.289 "digest": "sha256", 00:16:24.289 "dhgroup": "ffdhe2048" 00:16:24.289 } 00:16:24.289 } 00:16:24.289 ]' 00:16:24.289 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.289 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.289 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.289 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:24.289 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.289 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.290 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.290 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.551 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:24.551 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:25.123 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.123 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:25.123 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.123 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.123 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.123 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.123 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:25.123 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:25.384 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:25.384 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.384 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.384 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.384 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.384 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.384 15:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.384 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.384 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.384 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.384 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.384 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.384 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.644 00:16:25.644 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.644 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.644 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.904 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.905 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.905 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.905 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.905 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.905 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.905 { 00:16:25.905 "cntlid": 11, 00:16:25.905 "qid": 0, 00:16:25.905 "state": "enabled", 00:16:25.905 "thread": "nvmf_tgt_poll_group_000", 00:16:25.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:25.905 "listen_address": { 00:16:25.905 "trtype": "TCP", 00:16:25.905 "adrfam": "IPv4", 00:16:25.905 "traddr": "10.0.0.2", 00:16:25.905 "trsvcid": "4420" 00:16:25.905 }, 00:16:25.905 "peer_address": { 00:16:25.905 "trtype": "TCP", 00:16:25.905 "adrfam": "IPv4", 00:16:25.905 "traddr": "10.0.0.1", 00:16:25.905 "trsvcid": "33722" 00:16:25.905 }, 00:16:25.905 "auth": { 00:16:25.905 "state": "completed", 00:16:25.905 "digest": "sha256", 00:16:25.905 "dhgroup": "ffdhe2048" 00:16:25.905 } 00:16:25.905 } 00:16:25.905 ]' 00:16:25.905 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.905 15:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.905 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.905 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.905 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.905 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.905 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.905 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.166 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:16:26.166 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:16:26.737 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.737 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:26.737 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.737 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.737 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.737 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.737 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.737 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.998 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:26.998 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.998 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.998 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.998 15:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:26.998 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.998 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.998 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.998 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.998 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.998 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.998 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.998 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.259 00:16:27.259 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.259 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.259 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.519 { 00:16:27.519 "cntlid": 13, 00:16:27.519 "qid": 0, 00:16:27.519 "state": "enabled", 00:16:27.519 "thread": "nvmf_tgt_poll_group_000", 00:16:27.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:27.519 "listen_address": { 00:16:27.519 "trtype": "TCP", 00:16:27.519 "adrfam": "IPv4", 00:16:27.519 "traddr": "10.0.0.2", 00:16:27.519 "trsvcid": "4420" 00:16:27.519 }, 00:16:27.519 "peer_address": { 00:16:27.519 "trtype": "TCP", 00:16:27.519 "adrfam": "IPv4", 00:16:27.519 "traddr": "10.0.0.1", 00:16:27.519 "trsvcid": "33746" 00:16:27.519 }, 00:16:27.519 "auth": { 00:16:27.519 "state": "completed", 00:16:27.519 "digest": 
"sha256", 00:16:27.519 "dhgroup": "ffdhe2048" 00:16:27.519 } 00:16:27.519 } 00:16:27.519 ]' 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.519 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.780 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:16:27.780 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:16:28.351 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.351 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:28.351 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.351 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.351 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.351 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.351 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.351 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.611 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:28.611 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.611 15:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.611 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:28.611 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:28.611 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.611 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:28.611 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.611 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.611 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.611 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:28.611 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.611 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.871 00:16:28.871 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.871 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.871 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.132 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.132 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.132 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.132 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.132 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.132 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.132 { 00:16:29.132 "cntlid": 15, 00:16:29.132 "qid": 0, 00:16:29.132 "state": "enabled", 00:16:29.132 "thread": "nvmf_tgt_poll_group_000", 00:16:29.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:29.132 "listen_address": { 00:16:29.132 "trtype": "TCP", 00:16:29.132 "adrfam": "IPv4", 00:16:29.132 "traddr": "10.0.0.2", 00:16:29.132 "trsvcid": "4420" 00:16:29.132 }, 00:16:29.132 "peer_address": { 00:16:29.132 "trtype": "TCP", 00:16:29.132 "adrfam": "IPv4", 00:16:29.132 "traddr": "10.0.0.1", 00:16:29.132 
"trsvcid": "33768" 00:16:29.132 }, 00:16:29.132 "auth": { 00:16:29.132 "state": "completed", 00:16:29.132 "digest": "sha256", 00:16:29.132 "dhgroup": "ffdhe2048" 00:16:29.132 } 00:16:29.132 } 00:16:29.132 ]' 00:16:29.132 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.132 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.132 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.132 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.132 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.132 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.132 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.132 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.392 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:16:29.392 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:16:29.964 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.964 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:29.964 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.964 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.964 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.964 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.964 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.964 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:29.964 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.225 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:30.225 15:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.225 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.225 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:30.225 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:30.225 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.225 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.225 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.225 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.225 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.225 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.225 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.225 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.486 00:16:30.486 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.486 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.486 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.747 { 00:16:30.747 "cntlid": 17, 00:16:30.747 "qid": 0, 00:16:30.747 "state": "enabled", 00:16:30.747 "thread": "nvmf_tgt_poll_group_000", 00:16:30.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:30.747 "listen_address": { 00:16:30.747 "trtype": "TCP", 00:16:30.747 "adrfam": "IPv4", 
00:16:30.747 "traddr": "10.0.0.2", 00:16:30.747 "trsvcid": "4420" 00:16:30.747 }, 00:16:30.747 "peer_address": { 00:16:30.747 "trtype": "TCP", 00:16:30.747 "adrfam": "IPv4", 00:16:30.747 "traddr": "10.0.0.1", 00:16:30.747 "trsvcid": "33806" 00:16:30.747 }, 00:16:30.747 "auth": { 00:16:30.747 "state": "completed", 00:16:30.747 "digest": "sha256", 00:16:30.747 "dhgroup": "ffdhe3072" 00:16:30.747 } 00:16:30.747 } 00:16:30.747 ]' 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.747 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.008 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:31.008 15:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:31.580 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.580 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:31.580 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.580 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.580 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.580 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.580 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.580 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.841 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.102 00:16:32.102 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.102 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.102 15:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.363 { 
00:16:32.363 "cntlid": 19, 00:16:32.363 "qid": 0, 00:16:32.363 "state": "enabled", 00:16:32.363 "thread": "nvmf_tgt_poll_group_000", 00:16:32.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:32.363 "listen_address": { 00:16:32.363 "trtype": "TCP", 00:16:32.363 "adrfam": "IPv4", 00:16:32.363 "traddr": "10.0.0.2", 00:16:32.363 "trsvcid": "4420" 00:16:32.363 }, 00:16:32.363 "peer_address": { 00:16:32.363 "trtype": "TCP", 00:16:32.363 "adrfam": "IPv4", 00:16:32.363 "traddr": "10.0.0.1", 00:16:32.363 "trsvcid": "33822" 00:16:32.363 }, 00:16:32.363 "auth": { 00:16:32.363 "state": "completed", 00:16:32.363 "digest": "sha256", 00:16:32.363 "dhgroup": "ffdhe3072" 00:16:32.363 } 00:16:32.363 } 00:16:32.363 ]' 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.363 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.624 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:16:32.624 15:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:16:33.196 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.196 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:33.196 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.196 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.196 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.196 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.196 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.196 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.457 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.718 00:16:33.718 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.718 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.718 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.979 15:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.979 { 00:16:33.979 "cntlid": 21, 00:16:33.979 "qid": 0, 00:16:33.979 "state": "enabled", 00:16:33.979 "thread": "nvmf_tgt_poll_group_000", 00:16:33.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:33.979 "listen_address": { 00:16:33.979 "trtype": "TCP", 00:16:33.979 "adrfam": "IPv4", 00:16:33.979 "traddr": "10.0.0.2", 00:16:33.979 "trsvcid": "4420" 00:16:33.979 }, 00:16:33.979 "peer_address": { 00:16:33.979 "trtype": "TCP", 00:16:33.979 "adrfam": "IPv4", 00:16:33.979 "traddr": "10.0.0.1", 00:16:33.979 "trsvcid": "33854" 00:16:33.979 }, 00:16:33.979 "auth": { 00:16:33.979 "state": "completed", 00:16:33.979 "digest": "sha256", 00:16:33.979 "dhgroup": "ffdhe3072" 00:16:33.979 } 00:16:33.979 } 00:16:33.979 ]' 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.979 15:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.239 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:16:34.239 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:16:34.810 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.810 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:34.810 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.810 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.810 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
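
[editor's note] The nvme connect / nvme disconnect lines above exercise the same DH-HMAC-CHAP handshake from the kernel initiator rather than the SPDK one. A trimmed sketch with the flags used in the trace (the DHHC-1 secrets are shortened to placeholders here; the real base64 blobs are generated earlier in the run, outside this excerpt):

  uuid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6   # host UUID copied from the log
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${uuid}" --hostid "$uuid" -l 0 \
      --dhchap-secret 'DHHC-1:02:<base64 host key>' \
      --dhchap-ctrl-secret 'DHHC-1:01:<base64 ctrl key>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
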
-- # [[ 0 == 0 ]] 00:16:34.810 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.810 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.810 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.071 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.331 00:16:35.331 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.331 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.331 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.592 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.592 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.592 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.592 15:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.592 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.592 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.592 { 00:16:35.592 "cntlid": 23, 00:16:35.592 "qid": 0, 00:16:35.592 "state": "enabled", 00:16:35.592 "thread": "nvmf_tgt_poll_group_000", 00:16:35.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:35.592 "listen_address": { 00:16:35.592 "trtype": "TCP", 00:16:35.592 "adrfam": "IPv4", 00:16:35.592 "traddr": "10.0.0.2", 00:16:35.592 "trsvcid": "4420" 00:16:35.592 }, 00:16:35.592 "peer_address": { 00:16:35.593 "trtype": "TCP", 00:16:35.593 "adrfam": "IPv4", 00:16:35.593 "traddr": "10.0.0.1", 00:16:35.593 "trsvcid": "33404" 00:16:35.593 }, 00:16:35.593 "auth": { 00:16:35.593 "state": "completed", 00:16:35.593 "digest": "sha256", 00:16:35.593 "dhgroup": "ffdhe3072" 00:16:35.593 } 00:16:35.593 } 00:16:35.593 ]' 00:16:35.593 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.593 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.593 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.593 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.593 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.593 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.593 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.593 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.854 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:16:35.854 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:16:36.425 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.425 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:36.425 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.425 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.425 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:36.425 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.425 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.425 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.425 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.686 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.947 00:16:36.947 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.947 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.947 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
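
[editor's note] Each round reconfigures the SPDK-side initiator before attaching; the ffdhe4096 sweep that begins here reduces to the following sketch. The key0/ckey0 names are key objects registered with the host app earlier in the run (not shown in this excerpt), and /var/tmp/host.sock is the separate RPC socket the initiator app listens on:

  hostrpc="scripts/rpc.py -s /var/tmp/host.sock"
  hostnqn="nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6"

  # Restrict the initiator to the digest/dhgroup pair under test,
  # then attach with the host and controller keys for this round.
  $hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  $hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
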
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.208 { 00:16:37.208 "cntlid": 25, 00:16:37.208 "qid": 0, 00:16:37.208 "state": "enabled", 00:16:37.208 "thread": "nvmf_tgt_poll_group_000", 00:16:37.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:37.208 "listen_address": { 00:16:37.208 "trtype": "TCP", 00:16:37.208 "adrfam": "IPv4", 00:16:37.208 "traddr": "10.0.0.2", 00:16:37.208 "trsvcid": "4420" 00:16:37.208 }, 00:16:37.208 "peer_address": { 00:16:37.208 "trtype": "TCP", 00:16:37.208 "adrfam": "IPv4", 00:16:37.208 "traddr": "10.0.0.1", 00:16:37.208 "trsvcid": "33424" 00:16:37.208 }, 00:16:37.208 "auth": { 00:16:37.208 "state": "completed", 00:16:37.208 "digest": "sha256", 00:16:37.208 "dhgroup": "ffdhe4096" 00:16:37.208 } 00:16:37.208 } 00:16:37.208 ]' 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.208 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.468 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:37.468 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.411 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.672 00:16:38.672 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.672 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.672 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.933 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.933 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.933 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.933 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.933 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.933 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.933 { 00:16:38.933 "cntlid": 27, 00:16:38.933 "qid": 0, 00:16:38.933 "state": "enabled", 00:16:38.933 "thread": "nvmf_tgt_poll_group_000", 00:16:38.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:38.933 "listen_address": { 00:16:38.933 "trtype": "TCP", 00:16:38.933 "adrfam": "IPv4", 00:16:38.933 "traddr": "10.0.0.2", 00:16:38.933 "trsvcid": "4420" 00:16:38.933 }, 00:16:38.934 "peer_address": { 00:16:38.934 "trtype": "TCP", 00:16:38.934 "adrfam": "IPv4", 00:16:38.934 "traddr": "10.0.0.1", 00:16:38.934 "trsvcid": "33462" 00:16:38.934 }, 00:16:38.934 "auth": { 00:16:38.934 "state": "completed", 00:16:38.934 "digest": "sha256", 00:16:38.934 "dhgroup": "ffdhe4096" 00:16:38.934 } 00:16:38.934 } 00:16:38.934 ]' 00:16:38.934 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.934 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.934 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.934 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.934 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.934 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.934 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.934 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.194 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:16:39.194 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:16:39.765 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
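
[editor's note] On the target side, every round is bracketed by granting and then revoking the host, which is what forces a fresh DH-HMAC-CHAP negotiation each time. Sketch of that bracket (again using scripts/rpc.py in place of the trace's rpc_cmd wrapper):

  hostnqn="nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6"

  # Grant the host with this round's key pair...
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # ... attach/connect and verify the qpair, as above ...
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
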
nqn.2024-03.io.spdk:cnode0 00:16:39.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.765 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:39.765 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.765 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.765 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.765 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.765 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:39.765 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.026 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.286 00:16:40.286 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:16:40.286 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.286 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.547 { 00:16:40.547 "cntlid": 29, 00:16:40.547 "qid": 0, 00:16:40.547 "state": "enabled", 00:16:40.547 "thread": "nvmf_tgt_poll_group_000", 00:16:40.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:40.547 "listen_address": { 00:16:40.547 "trtype": "TCP", 00:16:40.547 "adrfam": "IPv4", 00:16:40.547 "traddr": "10.0.0.2", 00:16:40.547 "trsvcid": "4420" 00:16:40.547 }, 00:16:40.547 "peer_address": { 00:16:40.547 "trtype": "TCP", 00:16:40.547 "adrfam": "IPv4", 00:16:40.547 "traddr": "10.0.0.1", 00:16:40.547 "trsvcid": "33496" 00:16:40.547 }, 00:16:40.547 "auth": { 00:16:40.547 "state": "completed", 00:16:40.547 "digest": "sha256", 00:16:40.547 "dhgroup": "ffdhe4096" 00:16:40.547 } 00:16:40.547 } 00:16:40.547 ]' 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.547 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.808 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:16:40.808 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: 
--dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:16:41.380 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.380 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:41.380 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.380 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.380 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.380 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.380 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.380 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.641 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.901 00:16:41.901 15:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.901 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.901 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.162 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.162 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.162 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.162 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.162 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.162 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.162 { 00:16:42.162 "cntlid": 31, 00:16:42.162 "qid": 0, 00:16:42.162 "state": "enabled", 00:16:42.162 "thread": "nvmf_tgt_poll_group_000", 00:16:42.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:42.162 "listen_address": { 00:16:42.162 "trtype": "TCP", 00:16:42.162 "adrfam": "IPv4", 00:16:42.162 "traddr": "10.0.0.2", 00:16:42.162 "trsvcid": "4420" 00:16:42.162 }, 00:16:42.162 "peer_address": { 00:16:42.162 "trtype": "TCP", 00:16:42.162 "adrfam": "IPv4", 00:16:42.162 "traddr": "10.0.0.1", 00:16:42.162 "trsvcid": "33516" 00:16:42.162 }, 00:16:42.162 "auth": { 00:16:42.162 "state": "completed", 00:16:42.162 "digest": "sha256", 00:16:42.162 "dhgroup": "ffdhe4096" 00:16:42.162 } 00:16:42.162 } 00:16:42.162 ]' 00:16:42.162 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.162 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.162 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.162 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.162 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.162 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.162 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.162 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.423 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:16:42.423 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret 
DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.379 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
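
[editor's note] The for-loops visible in the trace sweep a digest x dhgroup x key matrix; this excerpt covers the sha256 rows for ffdhe3072, ffdhe4096 and, starting here, ffdhe6144. The shape of the driver, reconstructed as a sketch (the real array contents are defined earlier in target/auth.sh and are an assumption here; the trace does show that ckey3 is empty, so the key3 rounds authenticate without a controller key, via the ${ckeys[$3]:+...} expansion):

  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # assumed subset; only these appear in this excerpt
  keys=(key0 key1 key2 key3)                 # key3 has no ckey3 counterpart

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done
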
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.692 00:16:43.692 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.692 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.692 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.019 { 00:16:44.019 "cntlid": 33, 00:16:44.019 "qid": 0, 00:16:44.019 "state": "enabled", 00:16:44.019 "thread": "nvmf_tgt_poll_group_000", 00:16:44.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:44.019 "listen_address": { 00:16:44.019 "trtype": "TCP", 00:16:44.019 "adrfam": "IPv4", 00:16:44.019 "traddr": "10.0.0.2", 00:16:44.019 "trsvcid": "4420" 00:16:44.019 }, 00:16:44.019 "peer_address": { 00:16:44.019 "trtype": "TCP", 00:16:44.019 "adrfam": "IPv4", 00:16:44.019 "traddr": "10.0.0.1", 00:16:44.019 "trsvcid": "33542" 00:16:44.019 }, 00:16:44.019 "auth": { 00:16:44.019 "state": "completed", 00:16:44.019 "digest": "sha256", 00:16:44.019 "dhgroup": "ffdhe6144" 00:16:44.019 } 00:16:44.019 } 00:16:44.019 ]' 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.019 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.299 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret 
DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:44.299 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:44.871 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.871 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:44.871 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.871 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.871 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.871 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.871 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:44.871 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.132 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.393 00:16:45.393 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.393 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.393 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.654 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.654 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.654 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.654 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.654 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.654 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.654 { 00:16:45.654 "cntlid": 35, 00:16:45.654 "qid": 0, 00:16:45.654 "state": "enabled", 00:16:45.654 "thread": "nvmf_tgt_poll_group_000", 00:16:45.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:45.654 "listen_address": { 00:16:45.654 "trtype": "TCP", 00:16:45.654 "adrfam": "IPv4", 00:16:45.654 "traddr": "10.0.0.2", 00:16:45.654 "trsvcid": "4420" 00:16:45.654 }, 00:16:45.654 "peer_address": { 00:16:45.654 "trtype": "TCP", 00:16:45.654 "adrfam": "IPv4", 00:16:45.654 "traddr": "10.0.0.1", 00:16:45.654 "trsvcid": "39946" 00:16:45.654 }, 00:16:45.654 "auth": { 00:16:45.654 "state": "completed", 00:16:45.654 "digest": "sha256", 00:16:45.654 "dhgroup": "ffdhe6144" 00:16:45.654 } 00:16:45.654 } 00:16:45.654 ]' 00:16:45.654 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.654 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.654 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.654 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.654 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.915 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.915 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.915 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.915 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:16:45.915 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.857 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.119 00:16:47.119 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.119 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.119 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.379 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.379 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.379 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.379 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.379 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.379 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.379 { 00:16:47.379 "cntlid": 37, 00:16:47.379 "qid": 0, 00:16:47.379 "state": "enabled", 00:16:47.379 "thread": "nvmf_tgt_poll_group_000", 00:16:47.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:47.379 "listen_address": { 00:16:47.379 "trtype": "TCP", 00:16:47.379 "adrfam": "IPv4", 00:16:47.379 "traddr": "10.0.0.2", 00:16:47.379 "trsvcid": "4420" 00:16:47.379 }, 00:16:47.379 "peer_address": { 00:16:47.379 "trtype": "TCP", 00:16:47.379 "adrfam": "IPv4", 00:16:47.379 "traddr": "10.0.0.1", 00:16:47.379 "trsvcid": "39972" 00:16:47.379 }, 00:16:47.379 "auth": { 00:16:47.379 "state": "completed", 00:16:47.379 "digest": "sha256", 00:16:47.379 "dhgroup": "ffdhe6144" 00:16:47.379 } 00:16:47.379 } 00:16:47.379 ]' 00:16:47.379 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.379 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.379 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.379 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.379 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.640 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.640 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:47.640 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.640 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:16:47.640 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.582 15:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.582 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.843 00:16:48.843 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.843 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.843 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.105 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.105 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.105 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.105 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.105 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.105 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.105 { 00:16:49.105 "cntlid": 39, 00:16:49.105 "qid": 0, 00:16:49.105 "state": "enabled", 00:16:49.105 "thread": "nvmf_tgt_poll_group_000", 00:16:49.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:49.105 "listen_address": { 00:16:49.105 "trtype": "TCP", 00:16:49.105 "adrfam": "IPv4", 00:16:49.105 "traddr": "10.0.0.2", 00:16:49.105 "trsvcid": "4420" 00:16:49.105 }, 00:16:49.105 "peer_address": { 00:16:49.105 "trtype": "TCP", 00:16:49.105 "adrfam": "IPv4", 00:16:49.105 "traddr": "10.0.0.1", 00:16:49.105 "trsvcid": "40010" 00:16:49.105 }, 00:16:49.105 "auth": { 00:16:49.105 "state": "completed", 00:16:49.105 "digest": "sha256", 00:16:49.105 "dhgroup": "ffdhe6144" 00:16:49.105 } 00:16:49.105 } 00:16:49.105 ]' 00:16:49.105 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.105 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.105 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.105 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.105 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.105 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:49.105 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.105 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.366 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:16:49.366 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:16:49.938 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.938 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:49.938 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.938 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.938 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.938 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.938 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.938 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:49.938 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
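
The records above repeat one connect_authenticate cycle per key: set the host's DH-HMAC-CHAP parameters, register the host NQN on the subsystem with the matching key(s), attach a controller through the host RPC socket, confirm the qpair reports auth state "completed" with the expected digest and dhgroup, then detach. A condensed sketch of that cycle, taken from the repeating records in this trace (paths shortened; rpc_cmd targets the nvmf target as elsewhere in the log, host-side calls go through /var/tmp/host.sock, and key2/ckey2 stand for whichever keyid the cycle is on):

    # One connect_authenticate cycle, condensed from the repeating records above.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144       # host auth params
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2                # target side
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2       # authenticates here
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'                                 # expect "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
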
00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.199 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.772 00:16:50.772 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.772 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.772 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.032 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.032 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.032 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.032 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.033 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.033 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.033 { 00:16:51.033 "cntlid": 41, 00:16:51.033 "qid": 0, 00:16:51.033 "state": "enabled", 00:16:51.033 "thread": "nvmf_tgt_poll_group_000", 00:16:51.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:51.033 "listen_address": { 00:16:51.033 "trtype": "TCP", 00:16:51.033 "adrfam": "IPv4", 00:16:51.033 "traddr": "10.0.0.2", 00:16:51.033 "trsvcid": "4420" 00:16:51.033 }, 00:16:51.033 "peer_address": { 00:16:51.033 "trtype": "TCP", 00:16:51.033 "adrfam": "IPv4", 00:16:51.033 "traddr": "10.0.0.1", 00:16:51.033 "trsvcid": "40030" 00:16:51.033 }, 00:16:51.033 "auth": { 00:16:51.033 "state": "completed", 00:16:51.033 "digest": "sha256", 00:16:51.033 "dhgroup": "ffdhe8192" 00:16:51.033 } 00:16:51.033 } 00:16:51.033 ]' 00:16:51.033 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.033 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.033 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.033 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.033 15:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.033 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.033 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.033 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.293 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:51.293 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:51.865 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.865 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:51.865 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.865 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.866 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.866 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.866 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.866 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.125 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:52.125 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.125 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.125 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.125 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.125 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.125 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.126 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.126 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.126 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.126 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.126 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.126 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.696 00:16:52.696 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.696 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.696 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.696 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.696 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.696 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.696 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.696 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.696 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.696 { 00:16:52.696 "cntlid": 43, 00:16:52.696 "qid": 0, 00:16:52.696 "state": "enabled", 00:16:52.696 "thread": "nvmf_tgt_poll_group_000", 00:16:52.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:52.696 "listen_address": { 00:16:52.696 "trtype": "TCP", 00:16:52.696 "adrfam": "IPv4", 00:16:52.697 "traddr": "10.0.0.2", 00:16:52.697 "trsvcid": "4420" 00:16:52.697 }, 00:16:52.697 "peer_address": { 00:16:52.697 "trtype": "TCP", 00:16:52.697 "adrfam": "IPv4", 00:16:52.697 "traddr": "10.0.0.1", 00:16:52.697 "trsvcid": "40050" 00:16:52.697 }, 00:16:52.697 "auth": { 00:16:52.697 "state": "completed", 00:16:52.697 "digest": "sha256", 00:16:52.697 "dhgroup": "ffdhe8192" 00:16:52.697 } 00:16:52.697 } 00:16:52.697 ]' 00:16:52.697 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.957 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:52.957 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.957 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.957 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.957 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.957 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.957 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.219 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:16:53.219 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:16:53.790 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.790 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:53.790 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.790 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.790 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.790 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.790 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:53.790 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.051 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:54.051 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.051 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.051 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.051 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.051 15:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.051 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.051 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.051 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.051 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.051 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.051 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.051 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.311 00:16:54.311 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.311 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.311 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.573 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.573 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.573 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.573 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.573 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.573 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.573 { 00:16:54.573 "cntlid": 45, 00:16:54.573 "qid": 0, 00:16:54.573 "state": "enabled", 00:16:54.573 "thread": "nvmf_tgt_poll_group_000", 00:16:54.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:54.573 "listen_address": { 00:16:54.573 "trtype": "TCP", 00:16:54.573 "adrfam": "IPv4", 00:16:54.573 "traddr": "10.0.0.2", 00:16:54.573 "trsvcid": "4420" 00:16:54.573 }, 00:16:54.573 "peer_address": { 00:16:54.573 "trtype": "TCP", 00:16:54.573 "adrfam": "IPv4", 00:16:54.573 "traddr": "10.0.0.1", 00:16:54.573 "trsvcid": "40072" 00:16:54.573 }, 00:16:54.573 "auth": { 00:16:54.573 "state": "completed", 00:16:54.573 "digest": "sha256", 00:16:54.573 "dhgroup": "ffdhe8192" 00:16:54.573 } 00:16:54.573 } 00:16:54.573 ]' 00:16:54.573 
15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.573 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.573 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.833 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.833 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.833 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.833 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.833 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.833 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:16:54.833 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:16:55.775 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.775 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:55.775 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.775 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.775 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.775 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.775 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.775 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.775 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:55.775 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.775 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.775 15:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.776 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.776 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.776 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:55.776 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.776 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.776 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.776 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.776 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.776 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.348 00:16:56.348 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.348 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.348 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.348 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.348 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.348 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.348 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.348 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.348 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.348 { 00:16:56.348 "cntlid": 47, 00:16:56.348 "qid": 0, 00:16:56.348 "state": "enabled", 00:16:56.348 "thread": "nvmf_tgt_poll_group_000", 00:16:56.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:56.348 "listen_address": { 00:16:56.348 "trtype": "TCP", 00:16:56.348 "adrfam": "IPv4", 00:16:56.348 "traddr": "10.0.0.2", 00:16:56.348 "trsvcid": "4420" 00:16:56.348 }, 00:16:56.348 "peer_address": { 00:16:56.348 "trtype": "TCP", 00:16:56.348 "adrfam": "IPv4", 00:16:56.348 "traddr": "10.0.0.1", 00:16:56.348 "trsvcid": "56552" 00:16:56.348 }, 00:16:56.348 "auth": { 00:16:56.348 "state": "completed", 00:16:56.348 
"digest": "sha256", 00:16:56.348 "dhgroup": "ffdhe8192" 00:16:56.348 } 00:16:56.348 } 00:16:56.348 ]' 00:16:56.348 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.608 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.608 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.608 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.608 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.608 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.608 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.608 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.869 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:16:56.869 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:16:57.440 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.440 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:57.440 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.440 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.440 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.440 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:57.440 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.440 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.440 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.440 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.701 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:57.701 15:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.701 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.701 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:57.701 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.701 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.701 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.701 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.701 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.701 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.701 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.701 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.701 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.961 00:16:57.961 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.961 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.962 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.222 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.222 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.222 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.222 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.222 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.222 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.222 { 00:16:58.222 "cntlid": 49, 00:16:58.222 "qid": 0, 00:16:58.222 "state": "enabled", 00:16:58.222 "thread": "nvmf_tgt_poll_group_000", 00:16:58.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:58.222 "listen_address": { 00:16:58.222 "trtype": "TCP", 00:16:58.222 "adrfam": "IPv4", 
00:16:58.222 "traddr": "10.0.0.2", 00:16:58.222 "trsvcid": "4420" 00:16:58.222 }, 00:16:58.222 "peer_address": { 00:16:58.222 "trtype": "TCP", 00:16:58.222 "adrfam": "IPv4", 00:16:58.222 "traddr": "10.0.0.1", 00:16:58.222 "trsvcid": "56576" 00:16:58.222 }, 00:16:58.222 "auth": { 00:16:58.222 "state": "completed", 00:16:58.222 "digest": "sha384", 00:16:58.222 "dhgroup": "null" 00:16:58.222 } 00:16:58.222 } 00:16:58.222 ]' 00:16:58.222 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.222 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.222 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.222 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:58.222 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.222 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.222 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.222 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.484 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:58.484 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:16:59.055 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.055 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:59.055 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.055 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.055 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.055 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.055 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.055 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.315 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.577 00:16:59.577 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.577 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.577 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.837 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.837 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.837 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.837 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.837 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.837 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.837 { 00:16:59.837 "cntlid": 51, 00:16:59.837 "qid": 0, 00:16:59.837 "state": "enabled", 
00:16:59.837 "thread": "nvmf_tgt_poll_group_000", 00:16:59.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:59.837 "listen_address": { 00:16:59.837 "trtype": "TCP", 00:16:59.837 "adrfam": "IPv4", 00:16:59.837 "traddr": "10.0.0.2", 00:16:59.837 "trsvcid": "4420" 00:16:59.837 }, 00:16:59.837 "peer_address": { 00:16:59.837 "trtype": "TCP", 00:16:59.837 "adrfam": "IPv4", 00:16:59.837 "traddr": "10.0.0.1", 00:16:59.837 "trsvcid": "56596" 00:16:59.837 }, 00:16:59.837 "auth": { 00:16:59.837 "state": "completed", 00:16:59.837 "digest": "sha384", 00:16:59.837 "dhgroup": "null" 00:16:59.837 } 00:16:59.837 } 00:16:59.837 ]' 00:16:59.837 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.837 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.838 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.838 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:59.838 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.838 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.838 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.838 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.098 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:00.098 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:00.668 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.668 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:00.668 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.668 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.668 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.668 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.668 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:00.668 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.928 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.189 00:17:01.189 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.189 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.189 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.189 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.189 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.189 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.189 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.449 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.449 15:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.449 { 00:17:01.449 "cntlid": 53, 00:17:01.449 "qid": 0, 00:17:01.449 "state": "enabled", 00:17:01.449 "thread": "nvmf_tgt_poll_group_000", 00:17:01.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:01.449 "listen_address": { 00:17:01.449 "trtype": "TCP", 00:17:01.449 "adrfam": "IPv4", 00:17:01.449 "traddr": "10.0.0.2", 00:17:01.449 "trsvcid": "4420" 00:17:01.449 }, 00:17:01.449 "peer_address": { 00:17:01.449 "trtype": "TCP", 00:17:01.449 "adrfam": "IPv4", 00:17:01.449 "traddr": "10.0.0.1", 00:17:01.449 "trsvcid": "56630" 00:17:01.449 }, 00:17:01.449 "auth": { 00:17:01.449 "state": "completed", 00:17:01.449 "digest": "sha384", 00:17:01.449 "dhgroup": "null" 00:17:01.449 } 00:17:01.449 } 00:17:01.449 ]' 00:17:01.449 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.449 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.449 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.449 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.449 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.449 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.449 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.449 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.710 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:01.710 15:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:02.281 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.281 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:02.281 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.281 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.281 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.281 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:02.281 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.281 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.542 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.803 00:17:02.803 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.803 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.803 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.803 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.803 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.803 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.803 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.064 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.064 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.064 { 00:17:03.064 "cntlid": 55, 00:17:03.064 "qid": 0, 00:17:03.064 "state": "enabled", 00:17:03.064 "thread": "nvmf_tgt_poll_group_000", 00:17:03.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:03.064 "listen_address": { 00:17:03.064 "trtype": "TCP", 00:17:03.064 "adrfam": "IPv4", 00:17:03.064 "traddr": "10.0.0.2", 00:17:03.064 "trsvcid": "4420" 00:17:03.064 }, 00:17:03.064 "peer_address": { 00:17:03.064 "trtype": "TCP", 00:17:03.064 "adrfam": "IPv4", 00:17:03.064 "traddr": "10.0.0.1", 00:17:03.064 "trsvcid": "56658" 00:17:03.064 }, 00:17:03.064 "auth": { 00:17:03.064 "state": "completed", 00:17:03.064 "digest": "sha384", 00:17:03.064 "dhgroup": "null" 00:17:03.064 } 00:17:03.064 } 00:17:03.064 ]' 00:17:03.064 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.064 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.064 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.064 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:03.064 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.064 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.064 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.064 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.325 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:03.325 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:03.896 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.896 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:03.896 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.896 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.896 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.896 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.896 15:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.896 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:03.896 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.156 15:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.417 00:17:04.417 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.417 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.418 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.418 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.418 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.418 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:04.418 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.418 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.418 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.418 { 00:17:04.418 "cntlid": 57, 00:17:04.418 "qid": 0, 00:17:04.418 "state": "enabled", 00:17:04.418 "thread": "nvmf_tgt_poll_group_000", 00:17:04.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:04.418 "listen_address": { 00:17:04.418 "trtype": "TCP", 00:17:04.418 "adrfam": "IPv4", 00:17:04.418 "traddr": "10.0.0.2", 00:17:04.418 "trsvcid": "4420" 00:17:04.418 }, 00:17:04.418 "peer_address": { 00:17:04.418 "trtype": "TCP", 00:17:04.418 "adrfam": "IPv4", 00:17:04.418 "traddr": "10.0.0.1", 00:17:04.418 "trsvcid": "56678" 00:17:04.418 }, 00:17:04.418 "auth": { 00:17:04.418 "state": "completed", 00:17:04.418 "digest": "sha384", 00:17:04.418 "dhgroup": "ffdhe2048" 00:17:04.418 } 00:17:04.418 } 00:17:04.418 ]' 00:17:04.418 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.679 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.679 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.679 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:04.679 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.679 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.679 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.679 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.940 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:04.940 15:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:05.511 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.511 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:05.511 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.511 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.511 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.511 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.511 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.511 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.772 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.033 00:17:06.033 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.033 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.033 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.033 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.033 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.033 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.033 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.033 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.033 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.033 { 00:17:06.033 "cntlid": 59, 00:17:06.033 "qid": 0, 00:17:06.033 "state": "enabled", 00:17:06.033 "thread": "nvmf_tgt_poll_group_000", 00:17:06.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:06.033 "listen_address": { 00:17:06.033 "trtype": "TCP", 00:17:06.033 "adrfam": "IPv4", 00:17:06.033 "traddr": "10.0.0.2", 00:17:06.033 "trsvcid": "4420" 00:17:06.033 }, 00:17:06.033 "peer_address": { 00:17:06.033 "trtype": "TCP", 00:17:06.033 "adrfam": "IPv4", 00:17:06.033 "traddr": "10.0.0.1", 00:17:06.033 "trsvcid": "48956" 00:17:06.033 }, 00:17:06.033 "auth": { 00:17:06.033 "state": "completed", 00:17:06.033 "digest": "sha384", 00:17:06.033 "dhgroup": "ffdhe2048" 00:17:06.033 } 00:17:06.033 } 00:17:06.033 ]' 00:17:06.033 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.293 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.293 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.293 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.293 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.293 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.293 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.293 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.554 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:06.554 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:07.125 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.125 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:07.125 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.125 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.125 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.125 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.125 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.125 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.386 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.386 00:17:07.647 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.647 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.647 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.647 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.647 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.647 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.647 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.647 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.647 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.647 { 00:17:07.647 "cntlid": 61, 00:17:07.647 "qid": 0, 00:17:07.647 "state": "enabled", 00:17:07.647 "thread": "nvmf_tgt_poll_group_000", 00:17:07.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:07.647 "listen_address": { 00:17:07.647 "trtype": "TCP", 00:17:07.647 "adrfam": "IPv4", 00:17:07.647 "traddr": "10.0.0.2", 00:17:07.647 "trsvcid": "4420" 00:17:07.647 }, 00:17:07.647 "peer_address": { 00:17:07.647 "trtype": "TCP", 00:17:07.647 "adrfam": "IPv4", 00:17:07.647 "traddr": "10.0.0.1", 00:17:07.647 "trsvcid": "48976" 00:17:07.647 }, 00:17:07.647 "auth": { 00:17:07.647 "state": "completed", 00:17:07.647 "digest": "sha384", 00:17:07.647 "dhgroup": "ffdhe2048" 00:17:07.647 } 00:17:07.647 } 00:17:07.647 ]' 00:17:07.647 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.908 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.908 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.908 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.908 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.908 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.908 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.908 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.169 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:08.169 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:08.740 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.740 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:08.740 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.740 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.740 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.740 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.740 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.740 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.000 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:09.000 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.000 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.000 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.000 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.000 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.000 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:09.000 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.000 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.000 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.000 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.001 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.001 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.001 00:17:09.261 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.261 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.261 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.261 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.261 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.261 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.261 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.261 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.261 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.261 { 00:17:09.261 "cntlid": 63, 00:17:09.261 "qid": 0, 00:17:09.261 "state": "enabled", 00:17:09.261 "thread": "nvmf_tgt_poll_group_000", 00:17:09.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:09.261 "listen_address": { 00:17:09.261 "trtype": "TCP", 00:17:09.261 "adrfam": "IPv4", 00:17:09.261 "traddr": "10.0.0.2", 00:17:09.261 "trsvcid": "4420" 00:17:09.261 }, 00:17:09.261 "peer_address": { 00:17:09.261 "trtype": "TCP", 00:17:09.261 "adrfam": "IPv4", 00:17:09.261 "traddr": "10.0.0.1", 00:17:09.261 "trsvcid": "48994" 00:17:09.261 }, 00:17:09.261 "auth": { 00:17:09.261 "state": "completed", 00:17:09.261 "digest": "sha384", 00:17:09.261 "dhgroup": "ffdhe2048" 00:17:09.261 } 00:17:09.261 } 00:17:09.261 ]' 00:17:09.261 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.261 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.261 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.522 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.522 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.522 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.522 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.522 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.522 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:09.522 15:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:10.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.464 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.725 
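[Annotation] At this point the outer loop has moved on to the ffdhe3072 DH group: for every (digest, dhgroup) pair the harness iterates over key IDs 0-3 and repeats the same cycle that the surrounding entries show. A minimal bash sketch of one iteration, using only the RPCs and flags visible in this log (the rpc.py path, socket, addresses and NQNs are taken from the log; the helper name run_cycle is hypothetical, and the bidirectional --dhchap-ctrlr-key/ckeyN arguments seen above are omitted for brevity):

# Hypothetical condensation of one connect_authenticate cycle from this log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6

run_cycle() { # usage: run_cycle <digest> <dhgroup> <keyid>
    local digest=$1 dhgroup=$2 keyid=$3
    # 1. restrict the host-side initiator to one digest/DH-group combination
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # 2. allow the host on the subsystem with the matching DH-HMAC-CHAP key
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
    # 3. attach a controller through the authenticating host stack
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid"
    # 4. (verify qpair auth state here), then tear everything down again
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}

The two sockets matter: calls with -s /var/tmp/host.sock drive the host-side bdev_nvme application, while the plain rpc_cmd calls in the log talk to the nvmf target over its default socket.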
00:17:10.725 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.725 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.725 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.986 { 00:17:10.986 "cntlid": 65, 00:17:10.986 "qid": 0, 00:17:10.986 "state": "enabled", 00:17:10.986 "thread": "nvmf_tgt_poll_group_000", 00:17:10.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:10.986 "listen_address": { 00:17:10.986 "trtype": "TCP", 00:17:10.986 "adrfam": "IPv4", 00:17:10.986 "traddr": "10.0.0.2", 00:17:10.986 "trsvcid": "4420" 00:17:10.986 }, 00:17:10.986 "peer_address": { 00:17:10.986 "trtype": "TCP", 00:17:10.986 "adrfam": "IPv4", 00:17:10.986 "traddr": "10.0.0.1", 00:17:10.986 "trsvcid": "49010" 00:17:10.986 }, 00:17:10.986 "auth": { 00:17:10.986 "state": "completed", 00:17:10.986 "digest": "sha384", 00:17:10.986 "dhgroup": "ffdhe3072" 00:17:10.986 } 00:17:10.986 } 00:17:10.986 ]' 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.986 15:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.247 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:11.247 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:11.822 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.822 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:11.822 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.822 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.822 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.822 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.822 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:11.822 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.082 15:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.341 00:17:12.341 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.341 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.341 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.601 { 00:17:12.601 "cntlid": 67, 00:17:12.601 "qid": 0, 00:17:12.601 "state": "enabled", 00:17:12.601 "thread": "nvmf_tgt_poll_group_000", 00:17:12.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:12.601 "listen_address": { 00:17:12.601 "trtype": "TCP", 00:17:12.601 "adrfam": "IPv4", 00:17:12.601 "traddr": "10.0.0.2", 00:17:12.601 "trsvcid": "4420" 00:17:12.601 }, 00:17:12.601 "peer_address": { 00:17:12.601 "trtype": "TCP", 00:17:12.601 "adrfam": "IPv4", 00:17:12.601 "traddr": "10.0.0.1", 00:17:12.601 "trsvcid": "49026" 00:17:12.601 }, 00:17:12.601 "auth": { 00:17:12.601 "state": "completed", 00:17:12.601 "digest": "sha384", 00:17:12.601 "dhgroup": "ffdhe3072" 00:17:12.601 } 00:17:12.601 } 00:17:12.601 ]' 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.601 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.861 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret 
DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:12.861 15:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:13.431 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.431 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:13.431 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.431 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.432 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.432 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.432 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.432 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.692 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.952 00:17:13.952 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.952 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.952 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.212 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.212 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.212 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.212 15:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.212 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.212 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.212 { 00:17:14.212 "cntlid": 69, 00:17:14.212 "qid": 0, 00:17:14.212 "state": "enabled", 00:17:14.212 "thread": "nvmf_tgt_poll_group_000", 00:17:14.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:14.212 "listen_address": { 00:17:14.212 "trtype": "TCP", 00:17:14.212 "adrfam": "IPv4", 00:17:14.212 "traddr": "10.0.0.2", 00:17:14.212 "trsvcid": "4420" 00:17:14.212 }, 00:17:14.212 "peer_address": { 00:17:14.212 "trtype": "TCP", 00:17:14.212 "adrfam": "IPv4", 00:17:14.212 "traddr": "10.0.0.1", 00:17:14.212 "trsvcid": "49050" 00:17:14.212 }, 00:17:14.212 "auth": { 00:17:14.212 "state": "completed", 00:17:14.212 "digest": "sha384", 00:17:14.212 "dhgroup": "ffdhe3072" 00:17:14.212 } 00:17:14.212 } 00:17:14.212 ]' 00:17:14.212 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.212 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.212 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.212 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.212 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.212 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.212 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.212 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:14.472 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:14.472 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:15.044 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.044 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:15.044 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.044 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.044 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.044 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.044 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.044 15:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
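[Annotation] Each bdev_connect above is followed by the same verification sequence, which the next entries repeat for ffdhe3072/key3: the host must report a controller named nvme0, and the target's qpair listing must show auth.state "completed" with the expected digest and DH group. A stand-alone sketch of that check, built only from commands present in this log (the expected values sha384/ffdhe3072 are this iteration's parameters):

# Hypothetical stand-alone version of the auth-state check from this log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# the host-side controller must have been created ...
name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]
# ... and the target must report a fully authenticated qpair
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]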
00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.306 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.566 00:17:15.566 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.566 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.566 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.827 { 00:17:15.827 "cntlid": 71, 00:17:15.827 "qid": 0, 00:17:15.827 "state": "enabled", 00:17:15.827 "thread": "nvmf_tgt_poll_group_000", 00:17:15.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:15.827 "listen_address": { 00:17:15.827 "trtype": "TCP", 00:17:15.827 "adrfam": "IPv4", 00:17:15.827 "traddr": "10.0.0.2", 00:17:15.827 "trsvcid": "4420" 00:17:15.827 }, 00:17:15.827 "peer_address": { 00:17:15.827 "trtype": "TCP", 00:17:15.827 "adrfam": "IPv4", 00:17:15.827 "traddr": "10.0.0.1", 00:17:15.827 "trsvcid": "41180" 00:17:15.827 }, 00:17:15.827 "auth": { 00:17:15.827 "state": "completed", 00:17:15.827 "digest": "sha384", 00:17:15.827 "dhgroup": "ffdhe3072" 00:17:15.827 } 00:17:15.827 } 00:17:15.827 ]' 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.827 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.088 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:16.088 15:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:16.662 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.662 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:16.662 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.662 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.662 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.662 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.662 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.662 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:16.662 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
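After each attach the trace verifies the result on both ends: the host must report a controller named nvme0, and the target's queue pair must show the negotiated digest, DH group, and a completed auth state. A sketch of that check, assuming the same rpc.py shorthand and the jq filters used in the log:
name=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected: sha384
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected: ffdhe4096 for this iteration
jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed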
00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.923 15:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.184 00:17:17.184 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.184 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.184 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.445 { 00:17:17.445 "cntlid": 73, 00:17:17.445 "qid": 0, 00:17:17.445 "state": "enabled", 00:17:17.445 "thread": "nvmf_tgt_poll_group_000", 00:17:17.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:17.445 "listen_address": { 00:17:17.445 "trtype": "TCP", 00:17:17.445 "adrfam": "IPv4", 00:17:17.445 "traddr": "10.0.0.2", 00:17:17.445 "trsvcid": "4420" 00:17:17.445 }, 00:17:17.445 "peer_address": { 00:17:17.445 "trtype": "TCP", 00:17:17.445 "adrfam": "IPv4", 00:17:17.445 "traddr": "10.0.0.1", 00:17:17.445 "trsvcid": "41218" 00:17:17.445 }, 00:17:17.445 "auth": { 00:17:17.445 "state": "completed", 00:17:17.445 "digest": "sha384", 00:17:17.445 "dhgroup": "ffdhe4096" 00:17:17.445 } 00:17:17.445 } 00:17:17.445 ]' 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.445 
15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.445 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.704 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:17.704 15:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:18.274 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.274 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:18.274 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.274 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.274 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.274 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.274 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.274 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.536 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.797 00:17:18.797 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.797 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.797 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.057 { 00:17:19.057 "cntlid": 75, 00:17:19.057 "qid": 0, 00:17:19.057 "state": "enabled", 00:17:19.057 "thread": "nvmf_tgt_poll_group_000", 00:17:19.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:19.057 "listen_address": { 00:17:19.057 "trtype": "TCP", 00:17:19.057 "adrfam": "IPv4", 00:17:19.057 "traddr": "10.0.0.2", 00:17:19.057 "trsvcid": "4420" 00:17:19.057 }, 00:17:19.057 "peer_address": { 00:17:19.057 "trtype": "TCP", 00:17:19.057 "adrfam": "IPv4", 00:17:19.057 "traddr": "10.0.0.1", 00:17:19.057 "trsvcid": "41248" 00:17:19.057 }, 00:17:19.057 "auth": { 00:17:19.057 "state": "completed", 00:17:19.057 "digest": "sha384", 00:17:19.057 "dhgroup": "ffdhe4096" 00:17:19.057 } 00:17:19.057 } 00:17:19.057 ]' 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.057 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.318 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:19.318 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:19.889 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.889 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:19.889 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.889 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.889 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.889 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.889 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.889 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.150 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:20.150 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.150 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.151 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:20.151 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:20.151 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.151 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.151 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.151 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.151 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.151 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.151 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.151 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.411 00:17:20.411 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.411 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.411 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.672 { 00:17:20.672 "cntlid": 77, 00:17:20.672 "qid": 0, 00:17:20.672 "state": "enabled", 00:17:20.672 "thread": "nvmf_tgt_poll_group_000", 00:17:20.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:20.672 "listen_address": { 00:17:20.672 "trtype": "TCP", 00:17:20.672 "adrfam": "IPv4", 00:17:20.672 "traddr": "10.0.0.2", 00:17:20.672 "trsvcid": "4420" 00:17:20.672 }, 00:17:20.672 "peer_address": { 00:17:20.672 "trtype": "TCP", 00:17:20.672 "adrfam": "IPv4", 00:17:20.672 "traddr": "10.0.0.1", 00:17:20.672 "trsvcid": "41280" 00:17:20.672 }, 00:17:20.672 "auth": { 00:17:20.672 "state": "completed", 00:17:20.672 "digest": "sha384", 00:17:20.672 "dhgroup": "ffdhe4096" 00:17:20.672 } 00:17:20.672 } 00:17:20.672 ]' 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.672 15:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.672 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.933 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:20.933 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.880 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.180 00:17:22.180 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.180 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.180 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.468 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.468 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.468 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.468 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.468 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.468 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.468 { 00:17:22.468 "cntlid": 79, 00:17:22.468 "qid": 0, 00:17:22.468 "state": "enabled", 00:17:22.468 "thread": "nvmf_tgt_poll_group_000", 00:17:22.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:22.468 "listen_address": { 00:17:22.468 "trtype": "TCP", 00:17:22.468 "adrfam": "IPv4", 00:17:22.468 "traddr": "10.0.0.2", 00:17:22.468 "trsvcid": "4420" 00:17:22.468 }, 00:17:22.468 "peer_address": { 00:17:22.468 "trtype": "TCP", 00:17:22.468 "adrfam": "IPv4", 00:17:22.468 "traddr": "10.0.0.1", 00:17:22.468 "trsvcid": "41310" 00:17:22.469 }, 00:17:22.469 "auth": { 00:17:22.469 "state": "completed", 00:17:22.469 "digest": "sha384", 00:17:22.469 "dhgroup": "ffdhe4096" 00:17:22.469 } 00:17:22.469 } 00:17:22.469 ]' 00:17:22.469 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.469 15:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.469 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.469 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.469 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.469 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.469 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.469 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.794 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:22.794 15:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:23.366 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.366 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:23.366 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.366 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.366 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.366 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.366 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.366 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.366 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.627 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:23.627 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.627 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.627 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:23.627 15:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:23.627 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.627 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.627 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.627 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.627 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.627 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.627 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.627 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.889 00:17:23.889 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.889 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.889 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.149 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.149 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.149 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.149 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.149 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.149 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.149 { 00:17:24.149 "cntlid": 81, 00:17:24.149 "qid": 0, 00:17:24.149 "state": "enabled", 00:17:24.149 "thread": "nvmf_tgt_poll_group_000", 00:17:24.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:24.149 "listen_address": { 00:17:24.149 "trtype": "TCP", 00:17:24.149 "adrfam": "IPv4", 00:17:24.149 "traddr": "10.0.0.2", 00:17:24.149 "trsvcid": "4420" 00:17:24.149 }, 00:17:24.149 "peer_address": { 00:17:24.149 "trtype": "TCP", 00:17:24.149 "adrfam": "IPv4", 00:17:24.149 "traddr": "10.0.0.1", 00:17:24.149 "trsvcid": "41342" 00:17:24.149 }, 00:17:24.149 "auth": { 00:17:24.149 "state": "completed", 00:17:24.149 "digest": 
"sha384", 00:17:24.149 "dhgroup": "ffdhe6144" 00:17:24.149 } 00:17:24.149 } 00:17:24.149 ]' 00:17:24.149 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.149 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.149 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.149 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:24.149 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.149 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.149 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.149 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.410 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:24.410 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:24.981 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.981 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:24.981 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.981 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.981 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.981 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.981 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:24.982 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.242 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.503 00:17:25.503 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.503 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.503 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.764 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.764 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.764 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.764 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.764 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.764 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.764 { 00:17:25.764 "cntlid": 83, 00:17:25.764 "qid": 0, 00:17:25.764 "state": "enabled", 00:17:25.764 "thread": "nvmf_tgt_poll_group_000", 00:17:25.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:25.764 "listen_address": { 00:17:25.764 "trtype": "TCP", 00:17:25.764 "adrfam": "IPv4", 00:17:25.764 "traddr": "10.0.0.2", 00:17:25.764 
"trsvcid": "4420" 00:17:25.764 }, 00:17:25.764 "peer_address": { 00:17:25.764 "trtype": "TCP", 00:17:25.764 "adrfam": "IPv4", 00:17:25.764 "traddr": "10.0.0.1", 00:17:25.764 "trsvcid": "54118" 00:17:25.764 }, 00:17:25.764 "auth": { 00:17:25.764 "state": "completed", 00:17:25.764 "digest": "sha384", 00:17:25.764 "dhgroup": "ffdhe6144" 00:17:25.764 } 00:17:25.764 } 00:17:25.764 ]' 00:17:25.764 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.764 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.764 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.764 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:25.764 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.026 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.026 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.026 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.026 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:26.026 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.968 
15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.968 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.969 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.969 15:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.229 00:17:27.490 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.490 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.490 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.490 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.490 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.490 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.490 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.490 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.490 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.490 { 00:17:27.490 "cntlid": 85, 00:17:27.490 "qid": 0, 00:17:27.490 "state": "enabled", 00:17:27.490 "thread": "nvmf_tgt_poll_group_000", 00:17:27.490 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:27.490 "listen_address": { 00:17:27.490 "trtype": "TCP", 00:17:27.490 "adrfam": "IPv4", 00:17:27.490 "traddr": "10.0.0.2", 00:17:27.490 "trsvcid": "4420" 00:17:27.490 }, 00:17:27.490 "peer_address": { 00:17:27.490 "trtype": "TCP", 00:17:27.490 "adrfam": "IPv4", 00:17:27.490 "traddr": "10.0.0.1", 00:17:27.490 "trsvcid": "54154" 00:17:27.490 }, 00:17:27.490 "auth": { 00:17:27.490 "state": "completed", 00:17:27.490 "digest": "sha384", 00:17:27.490 "dhgroup": "ffdhe6144" 00:17:27.490 } 00:17:27.490 } 00:17:27.490 ]' 00:17:27.490 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.490 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.490 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.750 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.750 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.750 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.750 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.750 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.011 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:28.011 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:28.582 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.582 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:28.582 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.582 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.582 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.582 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.582 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.582 15:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.843 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.104 00:17:29.104 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.105 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.105 15:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.365 { 00:17:29.365 "cntlid": 87, 
00:17:29.365 "qid": 0, 00:17:29.365 "state": "enabled", 00:17:29.365 "thread": "nvmf_tgt_poll_group_000", 00:17:29.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:29.365 "listen_address": { 00:17:29.365 "trtype": "TCP", 00:17:29.365 "adrfam": "IPv4", 00:17:29.365 "traddr": "10.0.0.2", 00:17:29.365 "trsvcid": "4420" 00:17:29.365 }, 00:17:29.365 "peer_address": { 00:17:29.365 "trtype": "TCP", 00:17:29.365 "adrfam": "IPv4", 00:17:29.365 "traddr": "10.0.0.1", 00:17:29.365 "trsvcid": "54186" 00:17:29.365 }, 00:17:29.365 "auth": { 00:17:29.365 "state": "completed", 00:17:29.365 "digest": "sha384", 00:17:29.365 "dhgroup": "ffdhe6144" 00:17:29.365 } 00:17:29.365 } 00:17:29.365 ]' 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.365 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.626 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:29.626 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:30.197 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.197 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:30.197 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.197 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.197 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.197 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.197 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.197 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.197 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.458 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.027 00:17:31.027 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.027 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.027 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.027 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.027 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.027 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.027 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.027 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.027 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.027 { 00:17:31.027 "cntlid": 89, 00:17:31.027 "qid": 0, 00:17:31.027 "state": "enabled", 00:17:31.027 "thread": "nvmf_tgt_poll_group_000", 00:17:31.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:31.027 "listen_address": { 00:17:31.027 "trtype": "TCP", 00:17:31.027 "adrfam": "IPv4", 00:17:31.027 "traddr": "10.0.0.2", 00:17:31.027 "trsvcid": "4420" 00:17:31.027 }, 00:17:31.027 "peer_address": { 00:17:31.027 "trtype": "TCP", 00:17:31.027 "adrfam": "IPv4", 00:17:31.027 "traddr": "10.0.0.1", 00:17:31.027 "trsvcid": "54222" 00:17:31.027 }, 00:17:31.027 "auth": { 00:17:31.027 "state": "completed", 00:17:31.027 "digest": "sha384", 00:17:31.027 "dhgroup": "ffdhe8192" 00:17:31.027 } 00:17:31.027 } 00:17:31.027 ]' 00:17:31.027 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.286 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.286 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.286 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:31.286 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.286 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.286 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.286 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.546 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:31.546 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:32.117 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.117 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:32.117 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.117 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.117 15:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.117 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.117 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.117 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.378 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.639 00:17:32.639 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.639 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.639 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.900 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.900 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:32.900 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.900 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.900 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.900 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.900 { 00:17:32.900 "cntlid": 91, 00:17:32.900 "qid": 0, 00:17:32.900 "state": "enabled", 00:17:32.900 "thread": "nvmf_tgt_poll_group_000", 00:17:32.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:32.900 "listen_address": { 00:17:32.900 "trtype": "TCP", 00:17:32.900 "adrfam": "IPv4", 00:17:32.900 "traddr": "10.0.0.2", 00:17:32.900 "trsvcid": "4420" 00:17:32.900 }, 00:17:32.900 "peer_address": { 00:17:32.900 "trtype": "TCP", 00:17:32.900 "adrfam": "IPv4", 00:17:32.900 "traddr": "10.0.0.1", 00:17:32.900 "trsvcid": "54244" 00:17:32.900 }, 00:17:32.900 "auth": { 00:17:32.900 "state": "completed", 00:17:32.900 "digest": "sha384", 00:17:32.900 "dhgroup": "ffdhe8192" 00:17:32.900 } 00:17:32.900 } 00:17:32.900 ]' 00:17:32.900 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.900 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.900 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.162 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.162 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.162 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.162 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.162 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.162 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:33.162 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:34.103 15:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.103 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.674 00:17:34.674 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.674 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.674 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.674 15:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.674 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.674 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.674 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.935 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.935 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.935 { 00:17:34.935 "cntlid": 93, 00:17:34.935 "qid": 0, 00:17:34.935 "state": "enabled", 00:17:34.935 "thread": "nvmf_tgt_poll_group_000", 00:17:34.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:34.935 "listen_address": { 00:17:34.935 "trtype": "TCP", 00:17:34.935 "adrfam": "IPv4", 00:17:34.935 "traddr": "10.0.0.2", 00:17:34.935 "trsvcid": "4420" 00:17:34.935 }, 00:17:34.935 "peer_address": { 00:17:34.935 "trtype": "TCP", 00:17:34.935 "adrfam": "IPv4", 00:17:34.935 "traddr": "10.0.0.1", 00:17:34.935 "trsvcid": "54280" 00:17:34.935 }, 00:17:34.935 "auth": { 00:17:34.935 "state": "completed", 00:17:34.935 "digest": "sha384", 00:17:34.935 "dhgroup": "ffdhe8192" 00:17:34.935 } 00:17:34.935 } 00:17:34.935 ]' 00:17:34.935 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.935 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.935 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.935 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.935 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.935 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.935 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.935 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.195 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:35.195 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:35.766 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.766 15:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:35.766 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.766 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.766 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.766 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.766 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.766 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.027 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:36.028 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.028 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.028 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:36.028 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.028 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.028 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:36.028 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.028 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.028 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.028 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.028 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.028 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.599 00:17:36.599 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.599 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.599 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.599 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.599 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.599 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.599 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.599 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.599 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.599 { 00:17:36.599 "cntlid": 95, 00:17:36.599 "qid": 0, 00:17:36.599 "state": "enabled", 00:17:36.599 "thread": "nvmf_tgt_poll_group_000", 00:17:36.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:36.599 "listen_address": { 00:17:36.599 "trtype": "TCP", 00:17:36.599 "adrfam": "IPv4", 00:17:36.599 "traddr": "10.0.0.2", 00:17:36.599 "trsvcid": "4420" 00:17:36.599 }, 00:17:36.599 "peer_address": { 00:17:36.599 "trtype": "TCP", 00:17:36.599 "adrfam": "IPv4", 00:17:36.599 "traddr": "10.0.0.1", 00:17:36.599 "trsvcid": "42124" 00:17:36.599 }, 00:17:36.599 "auth": { 00:17:36.599 "state": "completed", 00:17:36.599 "digest": "sha384", 00:17:36.599 "dhgroup": "ffdhe8192" 00:17:36.599 } 00:17:36.599 } 00:17:36.599 ]' 00:17:36.599 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.599 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.599 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.860 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.860 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.860 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.860 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.860 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.120 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:37.120 15:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:37.691 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.691 15:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:37.691 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.691 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.691 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.691 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:37.691 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.691 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.691 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.691 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.953 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:37.953 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.953 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.953 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:37.953 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.953 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.953 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.953 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.953 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.953 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.953 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.953 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.954 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.215 00:17:38.215 
15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.215 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.215 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.475 { 00:17:38.475 "cntlid": 97, 00:17:38.475 "qid": 0, 00:17:38.475 "state": "enabled", 00:17:38.475 "thread": "nvmf_tgt_poll_group_000", 00:17:38.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:38.475 "listen_address": { 00:17:38.475 "trtype": "TCP", 00:17:38.475 "adrfam": "IPv4", 00:17:38.475 "traddr": "10.0.0.2", 00:17:38.475 "trsvcid": "4420" 00:17:38.475 }, 00:17:38.475 "peer_address": { 00:17:38.475 "trtype": "TCP", 00:17:38.475 "adrfam": "IPv4", 00:17:38.475 "traddr": "10.0.0.1", 00:17:38.475 "trsvcid": "42164" 00:17:38.475 }, 00:17:38.475 "auth": { 00:17:38.475 "state": "completed", 00:17:38.475 "digest": "sha512", 00:17:38.475 "dhgroup": "null" 00:17:38.475 } 00:17:38.475 } 00:17:38.475 ]' 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.475 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.736 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:38.736 15:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:39.308 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.308 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:39.308 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.308 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.308 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.308 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.308 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:39.308 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.568 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.828 00:17:39.828 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.828 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.828 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.828 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.828 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.828 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.828 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.089 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.089 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.089 { 00:17:40.089 "cntlid": 99, 00:17:40.089 "qid": 0, 00:17:40.089 "state": "enabled", 00:17:40.089 "thread": "nvmf_tgt_poll_group_000", 00:17:40.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:40.089 "listen_address": { 00:17:40.089 "trtype": "TCP", 00:17:40.089 "adrfam": "IPv4", 00:17:40.089 "traddr": "10.0.0.2", 00:17:40.089 "trsvcid": "4420" 00:17:40.089 }, 00:17:40.089 "peer_address": { 00:17:40.089 "trtype": "TCP", 00:17:40.089 "adrfam": "IPv4", 00:17:40.089 "traddr": "10.0.0.1", 00:17:40.089 "trsvcid": "42198" 00:17:40.089 }, 00:17:40.089 "auth": { 00:17:40.089 "state": "completed", 00:17:40.089 "digest": "sha512", 00:17:40.089 "dhgroup": "null" 00:17:40.089 } 00:17:40.089 } 00:17:40.089 ]' 00:17:40.089 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.089 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.089 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.089 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:40.089 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.089 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.089 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.089 15:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.349 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:40.349 15:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:40.920 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.920 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:40.920 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.920 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.920 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.920 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.920 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.920 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.181 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:41.181 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.181 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.181 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:41.181 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.181 15:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.181 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.181 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.181 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.181 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.181 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.181 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
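The repeated [[ sha512 == \s\h\a\5\1\2 ]] and [[ null == \n\u\l\l ]] lines above are just xtrace's rendering of quoted pattern matches: each pass pipes the output of nvmf_subsystem_get_qpairs through jq and compares .auth.digest, .auth.dhgroup and .auth.state against the values configured for that iteration. A minimal standalone sketch of that verification step, assuming rpc.py is on PATH, the target uses the default RPC socket, and EXPECTED_DIGEST/EXPECTED_DHGROUP are hypothetical inputs supplied by the caller:

# Fetch the subsystem's qpairs and check the negotiated DH-HMAC-CHAP
# parameters. EXPECTED_* are placeholder inputs; in the trace they come
# from the surrounding digest/dhgroup loop.
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

digest=$(jq -r '.[0].auth.digest' <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
state=$(jq -r '.[0].auth.state' <<< "$qpairs")

[[ $digest == "$EXPECTED_DIGEST" ]]
[[ $dhgroup == "$EXPECTED_DHGROUP" ]]
[[ $state == "completed" ]]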
00:17:41.181 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.442 00:17:41.442 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.442 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.442 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.703 { 00:17:41.703 "cntlid": 101, 00:17:41.703 "qid": 0, 00:17:41.703 "state": "enabled", 00:17:41.703 "thread": "nvmf_tgt_poll_group_000", 00:17:41.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:41.703 "listen_address": { 00:17:41.703 "trtype": "TCP", 00:17:41.703 "adrfam": "IPv4", 00:17:41.703 "traddr": "10.0.0.2", 00:17:41.703 "trsvcid": "4420" 00:17:41.703 }, 00:17:41.703 "peer_address": { 00:17:41.703 "trtype": "TCP", 00:17:41.703 "adrfam": "IPv4", 00:17:41.703 "traddr": "10.0.0.1", 00:17:41.703 "trsvcid": "42220" 00:17:41.703 }, 00:17:41.703 "auth": { 00:17:41.703 "state": "completed", 00:17:41.703 "digest": "sha512", 00:17:41.703 "dhgroup": "null" 00:17:41.703 } 00:17:41.703 } 00:17:41.703 ]' 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.703 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.964 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:41.964 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:42.535 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.535 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:42.535 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.535 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.535 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.535 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.535 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.535 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.796 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.057 00:17:43.057 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.057 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.057 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.057 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.058 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.058 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.058 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.058 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.058 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.058 { 00:17:43.058 "cntlid": 103, 00:17:43.058 "qid": 0, 00:17:43.058 "state": "enabled", 00:17:43.058 "thread": "nvmf_tgt_poll_group_000", 00:17:43.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:43.058 "listen_address": { 00:17:43.058 "trtype": "TCP", 00:17:43.058 "adrfam": "IPv4", 00:17:43.058 "traddr": "10.0.0.2", 00:17:43.058 "trsvcid": "4420" 00:17:43.058 }, 00:17:43.058 "peer_address": { 00:17:43.058 "trtype": "TCP", 00:17:43.058 "adrfam": "IPv4", 00:17:43.058 "traddr": "10.0.0.1", 00:17:43.058 "trsvcid": "42244" 00:17:43.058 }, 00:17:43.058 "auth": { 00:17:43.058 "state": "completed", 00:17:43.058 "digest": "sha512", 00:17:43.058 "dhgroup": "null" 00:17:43.058 } 00:17:43.058 } 00:17:43.058 ]' 00:17:43.058 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.319 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.319 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.319 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:43.319 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.319 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.319 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.319 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.579 15:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:43.579 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:44.150 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.150 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:44.150 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.150 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.150 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.150 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.150 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.150 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.150 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.411 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:44.411 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.411 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.411 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:44.411 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:44.411 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.411 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.411 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.411 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.411 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.412 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
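The round that begins here (sha512 with ffdhe2048, key0) follows the same sequence as every connect_authenticate iteration in this log. A condensed sketch of that sequence, reconstructed from the trace above (host-side RPCs go to /var/tmp/host.sock, target-side RPCs to the target app's socket; NQNs, addresses, and the key names key0/ckey0, which the test registered earlier, are the ones used by this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
  # allow exactly one digest/dhgroup pair on the host side for this round
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # register the host on the target with its DH-HMAC-CHAP key (plus controller key when mutual)
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach from the host side; this is where the CHAP exchange actually runs
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0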
00:17:44.412 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.412 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.673 00:17:44.673 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.673 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.673 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.673 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.934 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.934 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.934 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.934 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.934 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.934 { 00:17:44.934 "cntlid": 105, 00:17:44.934 "qid": 0, 00:17:44.934 "state": "enabled", 00:17:44.934 "thread": "nvmf_tgt_poll_group_000", 00:17:44.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:44.934 "listen_address": { 00:17:44.934 "trtype": "TCP", 00:17:44.934 "adrfam": "IPv4", 00:17:44.934 "traddr": "10.0.0.2", 00:17:44.934 "trsvcid": "4420" 00:17:44.934 }, 00:17:44.934 "peer_address": { 00:17:44.934 "trtype": "TCP", 00:17:44.934 "adrfam": "IPv4", 00:17:44.934 "traddr": "10.0.0.1", 00:17:44.934 "trsvcid": "42280" 00:17:44.934 }, 00:17:44.934 "auth": { 00:17:44.934 "state": "completed", 00:17:44.934 "digest": "sha512", 00:17:44.934 "dhgroup": "ffdhe2048" 00:17:44.934 } 00:17:44.934 } 00:17:44.934 ]' 00:17:44.934 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.934 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.934 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.934 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:44.934 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.934 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.934 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.934 15:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.195 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:45.195 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:45.767 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.767 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:45.767 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.767 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.767 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.767 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.767 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.767 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.028 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.290 00:17:46.290 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.290 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.290 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.290 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.290 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.290 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.290 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.552 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.552 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.552 { 00:17:46.552 "cntlid": 107, 00:17:46.552 "qid": 0, 00:17:46.552 "state": "enabled", 00:17:46.552 "thread": "nvmf_tgt_poll_group_000", 00:17:46.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:46.552 "listen_address": { 00:17:46.552 "trtype": "TCP", 00:17:46.552 "adrfam": "IPv4", 00:17:46.552 "traddr": "10.0.0.2", 00:17:46.552 "trsvcid": "4420" 00:17:46.552 }, 00:17:46.552 "peer_address": { 00:17:46.552 "trtype": "TCP", 00:17:46.552 "adrfam": "IPv4", 00:17:46.552 "traddr": "10.0.0.1", 00:17:46.552 "trsvcid": "41976" 00:17:46.552 }, 00:17:46.552 "auth": { 00:17:46.552 "state": "completed", 00:17:46.552 "digest": "sha512", 00:17:46.552 "dhgroup": "ffdhe2048" 00:17:46.552 } 00:17:46.552 } 00:17:46.552 ]' 00:17:46.552 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.552 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.552 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.552 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.552 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:46.552 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.552 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.552 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.813 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:46.813 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:47.385 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.385 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:47.385 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.385 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.385 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.385 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.385 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.385 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
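After each attach, the script verifies both ends before tearing the pair down again; the jq probes at target/auth.sh@73-77 in the entries above amount to the following checks (continuing the variables from the sketch after the first ffdhe2048 round; an authentication failure would surface here as auth.state never reaching "completed"):

  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]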
00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.646 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.907 00:17:47.907 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.907 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.907 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.907 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.907 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.907 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.907 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.907 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.907 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.907 { 00:17:47.907 "cntlid": 109, 00:17:47.907 "qid": 0, 00:17:47.907 "state": "enabled", 00:17:47.907 "thread": "nvmf_tgt_poll_group_000", 00:17:47.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:47.907 "listen_address": { 00:17:47.907 "trtype": "TCP", 00:17:47.907 "adrfam": "IPv4", 00:17:47.907 "traddr": "10.0.0.2", 00:17:47.907 "trsvcid": "4420" 00:17:47.907 }, 00:17:47.907 "peer_address": { 00:17:47.907 "trtype": "TCP", 00:17:47.907 "adrfam": "IPv4", 00:17:47.907 "traddr": "10.0.0.1", 00:17:47.907 "trsvcid": "42006" 00:17:47.907 }, 00:17:47.907 "auth": { 00:17:47.907 "state": "completed", 00:17:47.907 "digest": "sha512", 00:17:47.907 "dhgroup": "ffdhe2048" 00:17:47.907 } 00:17:47.907 } 00:17:47.907 ]' 00:17:47.907 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.169 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.169 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.169 15:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.169 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.169 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.169 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.169 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.430 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:48.430 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:49.001 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.001 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:49.001 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.001 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.001 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.001 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.001 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.001 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.262 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:49.262 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.262 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.262 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:49.262 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.262 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.262 15:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:49.262 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.262 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.262 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.262 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.262 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.262 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.523 00:17:49.523 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.523 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.523 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.523 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.523 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.523 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.523 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.523 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.523 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.523 { 00:17:49.523 "cntlid": 111, 00:17:49.523 "qid": 0, 00:17:49.523 "state": "enabled", 00:17:49.523 "thread": "nvmf_tgt_poll_group_000", 00:17:49.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:49.523 "listen_address": { 00:17:49.523 "trtype": "TCP", 00:17:49.523 "adrfam": "IPv4", 00:17:49.523 "traddr": "10.0.0.2", 00:17:49.523 "trsvcid": "4420" 00:17:49.523 }, 00:17:49.523 "peer_address": { 00:17:49.523 "trtype": "TCP", 00:17:49.523 "adrfam": "IPv4", 00:17:49.523 "traddr": "10.0.0.1", 00:17:49.523 "trsvcid": "42028" 00:17:49.523 }, 00:17:49.523 "auth": { 00:17:49.523 "state": "completed", 00:17:49.523 "digest": "sha512", 00:17:49.523 "dhgroup": "ffdhe2048" 00:17:49.523 } 00:17:49.523 } 00:17:49.523 ]' 00:17:49.523 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.784 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.784 
15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.784 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.784 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.784 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.784 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.784 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.045 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:50.045 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:50.616 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.616 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:50.616 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.616 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.616 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.616 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.616 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.616 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.616 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.876 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.876 00:17:51.137 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.137 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.137 15:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.137 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.137 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.137 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.137 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.137 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.137 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.137 { 00:17:51.137 "cntlid": 113, 00:17:51.137 "qid": 0, 00:17:51.137 "state": "enabled", 00:17:51.137 "thread": "nvmf_tgt_poll_group_000", 00:17:51.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:51.137 "listen_address": { 00:17:51.137 "trtype": "TCP", 00:17:51.137 "adrfam": "IPv4", 00:17:51.137 "traddr": "10.0.0.2", 00:17:51.137 "trsvcid": "4420" 00:17:51.137 }, 00:17:51.137 "peer_address": { 00:17:51.137 "trtype": "TCP", 00:17:51.137 "adrfam": "IPv4", 00:17:51.137 "traddr": "10.0.0.1", 00:17:51.137 "trsvcid": "42052" 00:17:51.137 }, 00:17:51.137 "auth": { 00:17:51.137 "state": "completed", 00:17:51.137 "digest": "sha512", 00:17:51.137 "dhgroup": "ffdhe3072" 00:17:51.137 } 00:17:51.137 } 00:17:51.137 ]' 00:17:51.137 15:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.397 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.397 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.398 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.398 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.398 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.398 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.398 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.658 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:51.658 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:52.228 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.228 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:52.228 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.228 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.228 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.229 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.229 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.229 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.489 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.750 00:17:52.750 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.750 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.750 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.750 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.750 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.750 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.750 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.750 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.750 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.750 { 00:17:52.750 "cntlid": 115, 00:17:52.751 "qid": 0, 00:17:52.751 "state": "enabled", 00:17:52.751 "thread": "nvmf_tgt_poll_group_000", 00:17:52.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:52.751 "listen_address": { 00:17:52.751 "trtype": "TCP", 00:17:52.751 "adrfam": "IPv4", 00:17:52.751 "traddr": "10.0.0.2", 00:17:52.751 "trsvcid": "4420" 00:17:52.751 }, 00:17:52.751 "peer_address": { 00:17:52.751 "trtype": "TCP", 00:17:52.751 "adrfam": "IPv4", 
00:17:52.751 "traddr": "10.0.0.1", 00:17:52.751 "trsvcid": "42066" 00:17:52.751 }, 00:17:52.751 "auth": { 00:17:52.751 "state": "completed", 00:17:52.751 "digest": "sha512", 00:17:52.751 "dhgroup": "ffdhe3072" 00:17:52.751 } 00:17:52.751 } 00:17:52.751 ]' 00:17:53.011 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.011 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.011 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.011 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.011 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.011 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.011 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.011 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.273 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:53.273 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:53.846 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.846 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:53.846 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.846 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.846 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.846 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.846 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.846 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.107 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.368 00:17:54.368 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.368 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.368 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.368 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.368 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.368 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.368 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.368 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.368 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.368 { 00:17:54.368 "cntlid": 117, 00:17:54.368 "qid": 0, 00:17:54.368 "state": "enabled", 00:17:54.368 "thread": "nvmf_tgt_poll_group_000", 00:17:54.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:54.368 "listen_address": { 00:17:54.368 "trtype": "TCP", 
00:17:54.368 "adrfam": "IPv4", 00:17:54.368 "traddr": "10.0.0.2", 00:17:54.368 "trsvcid": "4420" 00:17:54.368 }, 00:17:54.368 "peer_address": { 00:17:54.368 "trtype": "TCP", 00:17:54.368 "adrfam": "IPv4", 00:17:54.368 "traddr": "10.0.0.1", 00:17:54.368 "trsvcid": "42084" 00:17:54.368 }, 00:17:54.368 "auth": { 00:17:54.368 "state": "completed", 00:17:54.368 "digest": "sha512", 00:17:54.368 "dhgroup": "ffdhe3072" 00:17:54.368 } 00:17:54.368 } 00:17:54.368 ]' 00:17:54.368 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.629 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.629 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.629 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.629 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.629 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.629 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.629 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.890 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:54.890 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:17:55.462 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.462 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:55.462 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.462 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.462 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.462 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.462 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.462 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.723 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.984 00:17:55.984 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.984 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.984 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.984 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.984 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.984 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.984 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.984 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.984 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.984 { 00:17:55.984 "cntlid": 119, 00:17:55.984 "qid": 0, 00:17:55.984 "state": "enabled", 00:17:55.984 "thread": "nvmf_tgt_poll_group_000", 00:17:55.984 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:55.984 "listen_address": { 00:17:55.984 "trtype": "TCP", 00:17:55.984 "adrfam": "IPv4", 00:17:55.984 "traddr": "10.0.0.2", 00:17:55.984 "trsvcid": "4420" 00:17:55.984 }, 00:17:55.984 "peer_address": { 00:17:55.984 "trtype": "TCP", 00:17:55.984 "adrfam": "IPv4", 00:17:55.984 "traddr": "10.0.0.1", 00:17:55.984 "trsvcid": "49042" 00:17:55.984 }, 00:17:55.984 "auth": { 00:17:55.984 "state": "completed", 00:17:55.984 "digest": "sha512", 00:17:55.984 "dhgroup": "ffdhe3072" 00:17:55.984 } 00:17:55.984 } 00:17:55.984 ]' 00:17:55.984 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.245 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.245 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.245 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.245 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.245 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.245 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.245 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.504 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:56.504 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:17:57.075 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.075 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:57.075 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.076 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.076 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.076 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.076 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.076 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.076 15:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.337 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.598 00:17:57.598 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.598 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.598 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.598 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.598 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.598 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.598 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.598 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.598 15:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.598 { 00:17:57.598 "cntlid": 121, 00:17:57.598 "qid": 0, 00:17:57.598 "state": "enabled", 00:17:57.598 "thread": "nvmf_tgt_poll_group_000", 00:17:57.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:57.598 "listen_address": { 00:17:57.598 "trtype": "TCP", 00:17:57.598 "adrfam": "IPv4", 00:17:57.598 "traddr": "10.0.0.2", 00:17:57.598 "trsvcid": "4420" 00:17:57.598 }, 00:17:57.598 "peer_address": { 00:17:57.598 "trtype": "TCP", 00:17:57.598 "adrfam": "IPv4", 00:17:57.598 "traddr": "10.0.0.1", 00:17:57.598 "trsvcid": "49080" 00:17:57.598 }, 00:17:57.598 "auth": { 00:17:57.598 "state": "completed", 00:17:57.598 "digest": "sha512", 00:17:57.598 "dhgroup": "ffdhe4096" 00:17:57.598 } 00:17:57.598 } 00:17:57.598 ]' 00:17:57.859 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.859 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.859 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.859 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.859 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.859 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.859 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.859 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.120 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:58.120 15:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:17:58.693 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.693 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:58.693 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.693 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.693 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:58.693 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.693 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.693 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.954 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.214 00:17:59.214 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.214 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.214 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.475 { 00:17:59.475 "cntlid": 123, 00:17:59.475 "qid": 0, 00:17:59.475 "state": "enabled", 00:17:59.475 "thread": "nvmf_tgt_poll_group_000", 00:17:59.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:59.475 "listen_address": { 00:17:59.475 "trtype": "TCP", 00:17:59.475 "adrfam": "IPv4", 00:17:59.475 "traddr": "10.0.0.2", 00:17:59.475 "trsvcid": "4420" 00:17:59.475 }, 00:17:59.475 "peer_address": { 00:17:59.475 "trtype": "TCP", 00:17:59.475 "adrfam": "IPv4", 00:17:59.475 "traddr": "10.0.0.1", 00:17:59.475 "trsvcid": "49104" 00:17:59.475 }, 00:17:59.475 "auth": { 00:17:59.475 "state": "completed", 00:17:59.475 "digest": "sha512", 00:17:59.475 "dhgroup": "ffdhe4096" 00:17:59.475 } 00:17:59.475 } 00:17:59.475 ]' 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.475 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.735 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:17:59.735 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:18:00.383 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.383 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:00.383 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.383 15:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.383 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.383 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.383 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.383 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.643 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.904 00:18:00.904 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.904 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.904 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.904 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.904 15:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.164 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.164 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.164 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.164 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.164 { 00:18:01.164 "cntlid": 125, 00:18:01.164 "qid": 0, 00:18:01.164 "state": "enabled", 00:18:01.164 "thread": "nvmf_tgt_poll_group_000", 00:18:01.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:01.164 "listen_address": { 00:18:01.164 "trtype": "TCP", 00:18:01.164 "adrfam": "IPv4", 00:18:01.164 "traddr": "10.0.0.2", 00:18:01.164 "trsvcid": "4420" 00:18:01.164 }, 00:18:01.164 "peer_address": { 00:18:01.164 "trtype": "TCP", 00:18:01.164 "adrfam": "IPv4", 00:18:01.164 "traddr": "10.0.0.1", 00:18:01.164 "trsvcid": "49120" 00:18:01.164 }, 00:18:01.164 "auth": { 00:18:01.164 "state": "completed", 00:18:01.164 "digest": "sha512", 00:18:01.164 "dhgroup": "ffdhe4096" 00:18:01.164 } 00:18:01.164 } 00:18:01.164 ]' 00:18:01.164 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.164 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.164 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.164 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.164 15:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.164 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.164 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.164 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.424 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:18:01.424 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:18:01.995 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.995 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:01.995 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.995 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.995 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.995 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.995 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.995 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.255 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:02.255 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.255 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.255 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:02.255 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.255 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.255 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:02.255 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.255 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.255 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.255 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.256 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.256 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.516 00:18:02.516 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.516 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.516 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.777 15:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.777 { 00:18:02.777 "cntlid": 127, 00:18:02.777 "qid": 0, 00:18:02.777 "state": "enabled", 00:18:02.777 "thread": "nvmf_tgt_poll_group_000", 00:18:02.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:02.777 "listen_address": { 00:18:02.777 "trtype": "TCP", 00:18:02.777 "adrfam": "IPv4", 00:18:02.777 "traddr": "10.0.0.2", 00:18:02.777 "trsvcid": "4420" 00:18:02.777 }, 00:18:02.777 "peer_address": { 00:18:02.777 "trtype": "TCP", 00:18:02.777 "adrfam": "IPv4", 00:18:02.777 "traddr": "10.0.0.1", 00:18:02.777 "trsvcid": "49150" 00:18:02.777 }, 00:18:02.777 "auth": { 00:18:02.777 "state": "completed", 00:18:02.777 "digest": "sha512", 00:18:02.777 "dhgroup": "ffdhe4096" 00:18:02.777 } 00:18:02.777 } 00:18:02.777 ]' 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.777 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.038 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:18:03.038 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:18:03.610 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.610 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:03.610 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.610 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.610 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.610 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.610 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.610 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.610 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.871 15:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.132 00:18:04.132 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.132 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.132 
15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.393 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.393 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.393 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.393 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.393 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.393 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.393 { 00:18:04.393 "cntlid": 129, 00:18:04.393 "qid": 0, 00:18:04.393 "state": "enabled", 00:18:04.393 "thread": "nvmf_tgt_poll_group_000", 00:18:04.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:04.393 "listen_address": { 00:18:04.393 "trtype": "TCP", 00:18:04.393 "adrfam": "IPv4", 00:18:04.393 "traddr": "10.0.0.2", 00:18:04.393 "trsvcid": "4420" 00:18:04.393 }, 00:18:04.393 "peer_address": { 00:18:04.393 "trtype": "TCP", 00:18:04.393 "adrfam": "IPv4", 00:18:04.393 "traddr": "10.0.0.1", 00:18:04.393 "trsvcid": "49174" 00:18:04.393 }, 00:18:04.393 "auth": { 00:18:04.393 "state": "completed", 00:18:04.393 "digest": "sha512", 00:18:04.393 "dhgroup": "ffdhe6144" 00:18:04.393 } 00:18:04.393 } 00:18:04.393 ]' 00:18:04.393 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.393 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.393 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.393 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.393 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.653 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.653 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.653 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.653 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:18:04.653 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret 
DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:18:05.230 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.491 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.061 00:18:06.061 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.061 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.061 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.061 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.061 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.061 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.061 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.061 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.061 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.061 { 00:18:06.061 "cntlid": 131, 00:18:06.061 "qid": 0, 00:18:06.061 "state": "enabled", 00:18:06.061 "thread": "nvmf_tgt_poll_group_000", 00:18:06.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:06.061 "listen_address": { 00:18:06.061 "trtype": "TCP", 00:18:06.061 "adrfam": "IPv4", 00:18:06.061 "traddr": "10.0.0.2", 00:18:06.061 "trsvcid": "4420" 00:18:06.061 }, 00:18:06.061 "peer_address": { 00:18:06.061 "trtype": "TCP", 00:18:06.061 "adrfam": "IPv4", 00:18:06.061 "traddr": "10.0.0.1", 00:18:06.061 "trsvcid": "45738" 00:18:06.061 }, 00:18:06.061 "auth": { 00:18:06.061 "state": "completed", 00:18:06.061 "digest": "sha512", 00:18:06.061 "dhgroup": "ffdhe6144" 00:18:06.061 } 00:18:06.061 } 00:18:06.061 ]' 00:18:06.061 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.061 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.061 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.061 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.321 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.321 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.321 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.321 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.322 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:18:06.322 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:18:07.263 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.263 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:07.263 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.263 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.263 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.263 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.263 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.263 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.263 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.524 00:18:07.524 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.524 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.524 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.785 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.785 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.785 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.785 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.785 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.785 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.785 { 00:18:07.785 "cntlid": 133, 00:18:07.785 "qid": 0, 00:18:07.785 "state": "enabled", 00:18:07.785 "thread": "nvmf_tgt_poll_group_000", 00:18:07.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:07.785 "listen_address": { 00:18:07.785 "trtype": "TCP", 00:18:07.785 "adrfam": "IPv4", 00:18:07.785 "traddr": "10.0.0.2", 00:18:07.785 "trsvcid": "4420" 00:18:07.785 }, 00:18:07.785 "peer_address": { 00:18:07.785 "trtype": "TCP", 00:18:07.785 "adrfam": "IPv4", 00:18:07.785 "traddr": "10.0.0.1", 00:18:07.785 "trsvcid": "45768" 00:18:07.785 }, 00:18:07.785 "auth": { 00:18:07.785 "state": "completed", 00:18:07.785 "digest": "sha512", 00:18:07.785 "dhgroup": "ffdhe6144" 00:18:07.785 } 00:18:07.785 } 00:18:07.785 ]' 00:18:07.785 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.785 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.785 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.785 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.785 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.046 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.046 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.046 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.046 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret 
DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:18:08.046 15:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.988 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.989 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.989 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:08.989 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.249 00:18:09.249 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.249 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.249 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.509 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.509 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.509 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.509 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.509 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.509 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.509 { 00:18:09.509 "cntlid": 135, 00:18:09.509 "qid": 0, 00:18:09.509 "state": "enabled", 00:18:09.509 "thread": "nvmf_tgt_poll_group_000", 00:18:09.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:09.509 "listen_address": { 00:18:09.509 "trtype": "TCP", 00:18:09.509 "adrfam": "IPv4", 00:18:09.509 "traddr": "10.0.0.2", 00:18:09.509 "trsvcid": "4420" 00:18:09.509 }, 00:18:09.509 "peer_address": { 00:18:09.509 "trtype": "TCP", 00:18:09.509 "adrfam": "IPv4", 00:18:09.509 "traddr": "10.0.0.1", 00:18:09.509 "trsvcid": "45782" 00:18:09.509 }, 00:18:09.509 "auth": { 00:18:09.509 "state": "completed", 00:18:09.509 "digest": "sha512", 00:18:09.509 "dhgroup": "ffdhe6144" 00:18:09.509 } 00:18:09.509 } 00:18:09.509 ]' 00:18:09.509 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.509 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.509 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.771 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.771 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.771 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.771 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.771 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.771 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:18:09.771 15:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.713 15:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.284 00:18:11.284 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.284 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.284 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.284 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.284 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.284 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.284 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.545 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.545 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.545 { 00:18:11.545 "cntlid": 137, 00:18:11.545 "qid": 0, 00:18:11.545 "state": "enabled", 00:18:11.545 "thread": "nvmf_tgt_poll_group_000", 00:18:11.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:11.545 "listen_address": { 00:18:11.545 "trtype": "TCP", 00:18:11.545 "adrfam": "IPv4", 00:18:11.545 "traddr": "10.0.0.2", 00:18:11.545 "trsvcid": "4420" 00:18:11.545 }, 00:18:11.545 "peer_address": { 00:18:11.545 "trtype": "TCP", 00:18:11.545 "adrfam": "IPv4", 00:18:11.545 "traddr": "10.0.0.1", 00:18:11.545 "trsvcid": "45802" 00:18:11.545 }, 00:18:11.545 "auth": { 00:18:11.545 "state": "completed", 00:18:11.545 "digest": "sha512", 00:18:11.545 "dhgroup": "ffdhe8192" 00:18:11.545 } 00:18:11.545 } 00:18:11.545 ]' 00:18:11.545 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.545 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.545 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.545 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.545 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.545 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.545 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.545 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.805 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:18:11.805 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:18:12.376 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.376 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:12.376 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.376 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.376 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.376 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.376 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.376 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.637 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:12.637 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.637 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.637 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:12.637 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.637 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.637 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.637 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.637 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.637 15:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.637 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.637 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.637 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.208 00:18:13.208 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.208 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.208 15:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.208 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.208 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.208 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.208 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.208 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.208 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.208 { 00:18:13.208 "cntlid": 139, 00:18:13.208 "qid": 0, 00:18:13.208 "state": "enabled", 00:18:13.208 "thread": "nvmf_tgt_poll_group_000", 00:18:13.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:13.208 "listen_address": { 00:18:13.208 "trtype": "TCP", 00:18:13.208 "adrfam": "IPv4", 00:18:13.208 "traddr": "10.0.0.2", 00:18:13.208 "trsvcid": "4420" 00:18:13.208 }, 00:18:13.208 "peer_address": { 00:18:13.208 "trtype": "TCP", 00:18:13.208 "adrfam": "IPv4", 00:18:13.208 "traddr": "10.0.0.1", 00:18:13.208 "trsvcid": "45830" 00:18:13.208 }, 00:18:13.208 "auth": { 00:18:13.208 "state": "completed", 00:18:13.208 "digest": "sha512", 00:18:13.208 "dhgroup": "ffdhe8192" 00:18:13.208 } 00:18:13.208 } 00:18:13.208 ]' 00:18:13.208 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.208 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.208 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.470 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.470 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.470 15:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.470 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.470 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.470 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:18:13.470 15:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: --dhchap-ctrl-secret DHHC-1:02:YjhhNDJmOGMzMDNkOTE5MTkwNTY1NTg1YTM3ZGE0M2MwODUzMGJlN2IzODIyYTI5RF/lcA==: 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.413 15:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.413 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.984 00:18:14.984 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.984 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.984 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.244 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.244 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.244 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.244 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.244 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.244 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.244 { 00:18:15.244 "cntlid": 141, 00:18:15.244 "qid": 0, 00:18:15.244 "state": "enabled", 00:18:15.244 "thread": "nvmf_tgt_poll_group_000", 00:18:15.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:15.244 "listen_address": { 00:18:15.244 "trtype": "TCP", 00:18:15.244 "adrfam": "IPv4", 00:18:15.244 "traddr": "10.0.0.2", 00:18:15.244 "trsvcid": "4420" 00:18:15.244 }, 00:18:15.244 "peer_address": { 00:18:15.244 "trtype": "TCP", 00:18:15.244 "adrfam": "IPv4", 00:18:15.244 "traddr": "10.0.0.1", 00:18:15.244 "trsvcid": "45856" 00:18:15.244 }, 00:18:15.244 "auth": { 00:18:15.244 "state": "completed", 00:18:15.244 "digest": "sha512", 00:18:15.244 "dhgroup": "ffdhe8192" 00:18:15.244 } 00:18:15.244 } 00:18:15.244 ]' 00:18:15.244 15:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.244 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.244 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.244 15:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.245 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.245 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.245 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.245 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.505 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:18:15.506 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:01:ZmQzZWM5NmM2N2E5M2U5YTcxYmJhYmQ3NWM1ZjIwNmKOC4Bk: 00:18:16.077 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.077 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:16.077 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.077 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.077 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.077 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.077 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.077 15:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.338 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:16.338 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.338 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.338 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:16.338 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:16.338 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.338 15:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:16.338 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.338 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.338 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.338 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.338 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.338 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.909 00:18:16.909 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.909 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.909 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.909 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.909 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.909 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.909 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.909 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.909 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.909 { 00:18:16.909 "cntlid": 143, 00:18:16.909 "qid": 0, 00:18:16.909 "state": "enabled", 00:18:16.909 "thread": "nvmf_tgt_poll_group_000", 00:18:16.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:16.909 "listen_address": { 00:18:16.909 "trtype": "TCP", 00:18:16.909 "adrfam": "IPv4", 00:18:16.909 "traddr": "10.0.0.2", 00:18:16.909 "trsvcid": "4420" 00:18:16.909 }, 00:18:16.909 "peer_address": { 00:18:16.909 "trtype": "TCP", 00:18:16.909 "adrfam": "IPv4", 00:18:16.909 "traddr": "10.0.0.1", 00:18:16.909 "trsvcid": "37260" 00:18:16.909 }, 00:18:16.909 "auth": { 00:18:16.909 "state": "completed", 00:18:16.909 "digest": "sha512", 00:18:16.909 "dhgroup": "ffdhe8192" 00:18:16.909 } 00:18:16.909 } 00:18:16.909 ]' 00:18:16.909 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.909 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.909 
15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.170 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.170 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.170 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.170 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.170 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.170 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:18:17.170 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.112 15:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.112 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.113 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.113 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.113 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.113 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.684 00:18:18.684 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.684 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.684 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.944 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.944 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.944 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.944 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.944 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.944 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.944 { 00:18:18.944 "cntlid": 145, 00:18:18.944 "qid": 0, 00:18:18.944 "state": "enabled", 00:18:18.944 "thread": "nvmf_tgt_poll_group_000", 00:18:18.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:18.944 "listen_address": { 00:18:18.944 "trtype": "TCP", 00:18:18.944 "adrfam": "IPv4", 00:18:18.944 "traddr": "10.0.0.2", 00:18:18.944 "trsvcid": "4420" 00:18:18.944 }, 00:18:18.944 "peer_address": { 00:18:18.944 
"trtype": "TCP", 00:18:18.944 "adrfam": "IPv4", 00:18:18.944 "traddr": "10.0.0.1", 00:18:18.944 "trsvcid": "37306" 00:18:18.944 }, 00:18:18.944 "auth": { 00:18:18.944 "state": "completed", 00:18:18.944 "digest": "sha512", 00:18:18.944 "dhgroup": "ffdhe8192" 00:18:18.944 } 00:18:18.944 } 00:18:18.944 ]' 00:18:18.944 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.944 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.944 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.944 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.945 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.945 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.945 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.945 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.205 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:18:19.205 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZGY2MWQxMjRiYmM5NjhlMmE2ZmUwMmRhYmI0NDBiYzcyYTVhZjJlM2Q3Zjk0OTg4TB31Mg==: --dhchap-ctrl-secret DHHC-1:03:ZmU3YTQ0NTg1ZjZlMTAyNTQxN2M2NWYyZDZmMzUxZWNkMzc3MTI1MjFlOTU3Mjc3MzI3ZjVkYzc4NGZjZTIyONGtKNY=: 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:19.777 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:20.348 request: 00:18:20.348 { 00:18:20.348 "name": "nvme0", 00:18:20.348 "trtype": "tcp", 00:18:20.348 "traddr": "10.0.0.2", 00:18:20.348 "adrfam": "ipv4", 00:18:20.348 "trsvcid": "4420", 00:18:20.348 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:20.348 "prchk_reftag": false, 00:18:20.348 "prchk_guard": false, 00:18:20.348 "hdgst": false, 00:18:20.348 "ddgst": false, 00:18:20.348 "dhchap_key": "key2", 00:18:20.348 "allow_unrecognized_csi": false, 00:18:20.349 "method": "bdev_nvme_attach_controller", 00:18:20.349 "req_id": 1 00:18:20.349 } 00:18:20.349 Got JSON-RPC error response 00:18:20.349 response: 00:18:20.349 { 00:18:20.349 "code": -5, 00:18:20.349 "message": "Input/output error" 00:18:20.349 } 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.349 15:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.349 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.610 request: 00:18:20.610 { 00:18:20.610 "name": "nvme0", 00:18:20.610 "trtype": "tcp", 00:18:20.610 "traddr": "10.0.0.2", 00:18:20.610 "adrfam": "ipv4", 00:18:20.610 "trsvcid": "4420", 00:18:20.610 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:20.610 "prchk_reftag": false, 00:18:20.610 "prchk_guard": false, 00:18:20.610 "hdgst": false, 00:18:20.610 "ddgst": false, 00:18:20.610 "dhchap_key": "key1", 00:18:20.610 "dhchap_ctrlr_key": "ckey2", 00:18:20.610 "allow_unrecognized_csi": false, 00:18:20.610 "method": "bdev_nvme_attach_controller", 00:18:20.610 "req_id": 1 00:18:20.610 } 00:18:20.610 Got JSON-RPC error response 00:18:20.610 response: 00:18:20.610 { 00:18:20.610 "code": -5, 00:18:20.610 "message": "Input/output error" 00:18:20.610 } 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:20.871 15:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.871 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.132 request: 00:18:21.132 { 00:18:21.132 "name": "nvme0", 00:18:21.132 "trtype": "tcp", 00:18:21.132 "traddr": "10.0.0.2", 00:18:21.132 "adrfam": "ipv4", 00:18:21.132 "trsvcid": "4420", 00:18:21.132 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:21.132 "prchk_reftag": false, 00:18:21.132 "prchk_guard": false, 00:18:21.132 "hdgst": false, 00:18:21.132 "ddgst": false, 00:18:21.132 "dhchap_key": "key1", 00:18:21.132 "dhchap_ctrlr_key": "ckey1", 00:18:21.132 "allow_unrecognized_csi": false, 00:18:21.132 "method": "bdev_nvme_attach_controller", 00:18:21.132 "req_id": 1 00:18:21.132 } 00:18:21.132 Got JSON-RPC error response 00:18:21.132 response: 00:18:21.132 { 00:18:21.132 "code": -5, 00:18:21.132 "message": "Input/output error" 00:18:21.132 } 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3747379 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3747379 ']' 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3747379 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:21.132 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3747379 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3747379' 00:18:21.393 killing process with pid 3747379 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3747379 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3747379 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3774026 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3774026 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3774026 ']' 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.393 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:21.394 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3774026 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3774026 ']' 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
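The target application is restarted at this point with --wait-for-rpc and the nvmf_auth debug flag, and from here the DH-HMAC-CHAP secrets are loaded into the target's keyring rather than passed inline as DHHC-1 strings. A minimal sketch of that keyring-based setup, assuming the default target RPC socket and using only RPCs that appear in this log (the /tmp/spdk.key-* paths are the test run's generated key files, not fixed names):

    # register a host key and its controller (bidirectional) counterpart in the keyring
    scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.vpR
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.g9Q

    # authorize the host NQN on the subsystem by keyring name instead of raw secret
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0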
00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.389 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 null0 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vpR 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.g9Q ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.g9Q 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.c3I 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.mkg ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mkg 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:22.651 15:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.LjT 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.kOA ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kOA 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SHS 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
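Each connect_authenticate pass in this log follows the same cycle: constrain the host's dhchap digest/dhgroup, authorize the host on the subsystem, attach a controller over the authenticated qpair, check the negotiated auth state via nvmf_subsystem_get_qpairs, then detach and remove the host. A condensed sketch of one pass, assuming /var/tmp/host.sock is the host-side RPC socket used throughout this log and composed only of commands visible above:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # host side: negotiate only sha512 / ffdhe8192
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # target side: authorize the host with the keyring-backed key
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # host side: attach; success means the DH-HMAC-CHAP handshake completed
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3

    # target side: the qpair should report "completed" for auth.state
    scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'

The negative cases in this log (the NOT bdev_connect calls with a mismatched key or digest) run the same attach command and expect the JSON-RPC error response {"code": -5, "message": "Input/output error"} instead of an attached controller.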
00:18:22.651 15:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.593 nvme0n1 00:18:23.593 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.593 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.593 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.593 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.593 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.593 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.593 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.593 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.593 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.593 { 00:18:23.593 "cntlid": 1, 00:18:23.593 "qid": 0, 00:18:23.593 "state": "enabled", 00:18:23.593 "thread": "nvmf_tgt_poll_group_000", 00:18:23.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:23.593 "listen_address": { 00:18:23.593 "trtype": "TCP", 00:18:23.593 "adrfam": "IPv4", 00:18:23.593 "traddr": "10.0.0.2", 00:18:23.593 "trsvcid": "4420" 00:18:23.593 }, 00:18:23.593 "peer_address": { 00:18:23.593 "trtype": "TCP", 00:18:23.593 "adrfam": "IPv4", 00:18:23.593 "traddr": "10.0.0.1", 00:18:23.593 "trsvcid": "37372" 00:18:23.593 }, 00:18:23.593 "auth": { 00:18:23.593 "state": "completed", 00:18:23.593 "digest": "sha512", 00:18:23.593 "dhgroup": "ffdhe8192" 00:18:23.593 } 00:18:23.593 } 00:18:23.593 ]' 00:18:23.593 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.593 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.593 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.853 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.853 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.853 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.853 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.854 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.854 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:18:23.854 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.799 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.800 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.059 request: 00:18:25.059 { 00:18:25.059 "name": "nvme0", 00:18:25.059 "trtype": "tcp", 00:18:25.059 "traddr": "10.0.0.2", 00:18:25.059 "adrfam": "ipv4", 00:18:25.059 "trsvcid": "4420", 00:18:25.059 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:25.059 "prchk_reftag": false, 00:18:25.059 "prchk_guard": false, 00:18:25.059 "hdgst": false, 00:18:25.059 "ddgst": false, 00:18:25.059 "dhchap_key": "key3", 00:18:25.059 "allow_unrecognized_csi": false, 00:18:25.059 "method": "bdev_nvme_attach_controller", 00:18:25.059 "req_id": 1 00:18:25.059 } 00:18:25.059 Got JSON-RPC error response 00:18:25.059 response: 00:18:25.059 { 00:18:25.059 "code": -5, 00:18:25.059 "message": "Input/output error" 00:18:25.059 } 00:18:25.059 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:25.059 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:25.059 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:25.059 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.059 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:25.059 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:25.059 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:25.059 15:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:25.059 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:25.059 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:25.059 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.320 request: 00:18:25.320 { 00:18:25.320 "name": "nvme0", 00:18:25.320 "trtype": "tcp", 00:18:25.320 "traddr": "10.0.0.2", 00:18:25.320 "adrfam": "ipv4", 00:18:25.320 "trsvcid": "4420", 00:18:25.320 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:25.320 "prchk_reftag": false, 00:18:25.320 "prchk_guard": false, 00:18:25.320 "hdgst": false, 00:18:25.320 "ddgst": false, 00:18:25.320 "dhchap_key": "key3", 00:18:25.320 "allow_unrecognized_csi": false, 00:18:25.320 "method": "bdev_nvme_attach_controller", 00:18:25.320 "req_id": 1 00:18:25.320 } 00:18:25.320 Got JSON-RPC error response 00:18:25.320 response: 00:18:25.320 { 00:18:25.320 "code": -5, 00:18:25.320 "message": "Input/output error" 00:18:25.320 } 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:25.320 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.321 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.321 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.581 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:25.581 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.581 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.581 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.581 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:25.581 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.581 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.581 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.581 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.582 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:25.582 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.582 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:25.582 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.582 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:25.582 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.582 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.582 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.582 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.882 request: 00:18:25.882 { 00:18:25.882 "name": "nvme0", 00:18:25.882 "trtype": "tcp", 00:18:25.882 "traddr": "10.0.0.2", 00:18:25.882 "adrfam": "ipv4", 00:18:25.882 "trsvcid": "4420", 00:18:25.882 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:25.882 "prchk_reftag": false, 00:18:25.882 "prchk_guard": false, 00:18:25.882 "hdgst": false, 00:18:25.882 "ddgst": false, 00:18:25.882 "dhchap_key": "key0", 00:18:25.882 "dhchap_ctrlr_key": "key1", 00:18:25.882 "allow_unrecognized_csi": false, 00:18:25.882 "method": "bdev_nvme_attach_controller", 00:18:25.882 "req_id": 1 00:18:25.882 } 00:18:25.882 Got JSON-RPC error response 00:18:25.882 response: 00:18:25.882 { 00:18:25.882 "code": -5, 00:18:25.882 "message": "Input/output error" 00:18:25.882 } 00:18:25.882 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:25.882 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:25.882 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:25.882 15:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.882 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:25.882 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:25.882 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:26.189 nvme0n1 00:18:26.189 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:26.189 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:26.189 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.449 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.449 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.449 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.449 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:26.449 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.449 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.449 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.449 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:26.449 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:26.449 15:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:27.390 nvme0n1 00:18:27.390 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:27.390 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:27.390 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:27.390 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.390 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.390 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.390 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.390 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.390 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:27.390 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:27.390 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.660 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.660 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:18:27.660 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: --dhchap-ctrl-secret DHHC-1:03:NGEwZDFjYmZmZGMwZjA3NDIyMDU3YzI5ZGJjNGEzNGYwYzFhOTE0NzZmOTAxYzQ3YzIzZTNiNjBjMzkyMzg5Ns0EN5E=: 00:18:28.239 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:28.239 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:28.239 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:28.239 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:28.239 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:28.240 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:28.240 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:28.240 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.240 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.500 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:18:28.500 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:28.500 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:28.500 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:28.500 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.500 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:28.500 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.500 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:28.500 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:28.500 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:28.761 request: 00:18:28.761 { 00:18:28.761 "name": "nvme0", 00:18:28.761 "trtype": "tcp", 00:18:28.761 "traddr": "10.0.0.2", 00:18:28.761 "adrfam": "ipv4", 00:18:28.761 "trsvcid": "4420", 00:18:28.761 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:28.761 "prchk_reftag": false, 00:18:28.761 "prchk_guard": false, 00:18:28.761 "hdgst": false, 00:18:28.761 "ddgst": false, 00:18:28.761 "dhchap_key": "key1", 00:18:28.761 "allow_unrecognized_csi": false, 00:18:28.761 "method": "bdev_nvme_attach_controller", 00:18:28.761 "req_id": 1 00:18:28.761 } 00:18:28.761 Got JSON-RPC error response 00:18:28.761 response: 00:18:28.761 { 00:18:28.761 "code": -5, 00:18:28.761 "message": "Input/output error" 00:18:28.761 } 00:18:29.022 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:29.022 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:29.022 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:29.022 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:29.022 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.022 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.022 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.592 nvme0n1 00:18:29.592 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:29.592 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:29.592 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.852 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.852 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.852 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.113 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:30.113 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.113 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.113 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.113 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:30.113 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:30.113 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:30.373 nvme0n1 00:18:30.373 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:30.373 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:30.373 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.373 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.373 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.373 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: '' 2s 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: ]] 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OWVmNzhlODhhODkyYmQxMDA0NDcxODU2ZjEyZmE2MjYxSYIe: 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:30.634 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: 2s 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: ]] 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2ViYTE3NzRlNjIzYTljYzk4ODc2YWQ3NWQ0OWUzZDQxMDQ3MzZmYTIyODJhY2Q1IJ4kmA==: 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:33.179 15:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.093 15:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.664 nvme0n1 00:18:35.664 15:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.664 15:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.664 15:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.664 15:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.664 15:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.664 15:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.924 15:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:35.924 15:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:35.924 15:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.185 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.185 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:36.185 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.185 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.185 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.185 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:36.185 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.445 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:37.016 request: 00:18:37.016 { 00:18:37.016 "name": "nvme0", 00:18:37.016 "dhchap_key": "key1", 00:18:37.016 "dhchap_ctrlr_key": "key3", 00:18:37.016 "method": "bdev_nvme_set_keys", 00:18:37.016 "req_id": 1 00:18:37.016 } 00:18:37.016 Got JSON-RPC error response 00:18:37.016 response: 00:18:37.016 { 00:18:37.016 "code": -13, 00:18:37.016 "message": "Permission denied" 00:18:37.016 } 00:18:37.016 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:37.016 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.016 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.016 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.016 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:37.016 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:37.016 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.276 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:37.276 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:38.217 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:38.217 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:38.217 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.478 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:38.478 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.478 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.478 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.478 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.478 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:38.478 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:38.478 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:39.047 nvme0n1 00:18:39.048 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.048 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.048 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.048 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.048 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.048 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:39.048 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.048 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
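What follows is a negative check: the target's keys for this host were just rotated to key2/key3 with nvmf_subsystem_set_keys, so asking the host to re-key the live controller with a pair the target does not hold has to fail. Stripped of the test plumbing, the step amounts to the following (sockets and names as used in this run; the trailing echo only makes the expectation explicit):

  # target now expects key2, with key3 as the controller key, for this host
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # re-keying the host controller with a mismatched pair is rejected with
  # JSON-RPC error -13 ("Permission denied"), as the response below shows
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key0 \
      && echo "unexpected: bdev_nvme_set_keys should have failed"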
00:18:39.048 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.048 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:39.048 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.048 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.048 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.617 request: 00:18:39.617 { 00:18:39.617 "name": "nvme0", 00:18:39.617 "dhchap_key": "key2", 00:18:39.617 "dhchap_ctrlr_key": "key0", 00:18:39.617 "method": "bdev_nvme_set_keys", 00:18:39.617 "req_id": 1 00:18:39.617 } 00:18:39.617 Got JSON-RPC error response 00:18:39.617 response: 00:18:39.617 { 00:18:39.617 "code": -13, 00:18:39.617 "message": "Permission denied" 00:18:39.617 } 00:18:39.617 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:39.617 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.617 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.617 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.617 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:39.617 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:39.617 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.878 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:39.878 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:40.820 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:40.820 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:40.820 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.080 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:41.080 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:41.080 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:41.080 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3747717 00:18:41.080 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3747717 ']' 00:18:41.080 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3747717 00:18:41.080 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:41.080 
15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:41.081 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3747717 00:18:41.081 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:41.081 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:41.081 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3747717' 00:18:41.081 killing process with pid 3747717 00:18:41.081 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3747717 00:18:41.081 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3747717 00:18:41.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:41.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:41.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:41.341 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:41.342 rmmod nvme_tcp 00:18:41.342 rmmod nvme_fabrics 00:18:41.342 rmmod nvme_keyring 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3774026 ']' 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3774026 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3774026 ']' 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3774026 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3774026 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3774026' 00:18:41.342 killing process with pid 3774026 00:18:41.342 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3774026 00:18:41.342 15:30:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3774026 00:18:41.603 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:41.603 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:41.603 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:41.603 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:41.603 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:41.603 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:41.603 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:41.603 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:41.603 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:41.603 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.603 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.603 15:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.516 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:43.516 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vpR /tmp/spdk.key-sha256.c3I /tmp/spdk.key-sha384.LjT /tmp/spdk.key-sha512.SHS /tmp/spdk.key-sha512.g9Q /tmp/spdk.key-sha384.mkg /tmp/spdk.key-sha256.kOA '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:43.516 00:18:43.516 real 2m37.663s 00:18:43.516 user 5m54.580s 00:18:43.516 sys 0m24.791s 00:18:43.516 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:43.516 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.516 ************************************ 00:18:43.516 END TEST nvmf_auth_target 00:18:43.516 ************************************ 00:18:43.516 15:31:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:43.516 15:31:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:43.516 15:31:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:43.516 15:31:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:43.516 15:31:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:43.778 ************************************ 00:18:43.778 START TEST nvmf_bdevio_no_huge 00:18:43.778 ************************************ 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:43.778 * Looking for test storage... 
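nvmf_auth_target has passed and the harness moves straight on to the bdevio run without hugepages. To reproduce just this next test outside the CI wrapper, the invocation recorded above should suffice from the same workspace (assuming an already built SPDK tree and the usual autotest environment):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages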
00:18:43.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:43.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.778 --rc genhtml_branch_coverage=1 00:18:43.778 --rc genhtml_function_coverage=1 00:18:43.778 --rc genhtml_legend=1 00:18:43.778 --rc geninfo_all_blocks=1 00:18:43.778 --rc geninfo_unexecuted_blocks=1 00:18:43.778 00:18:43.778 ' 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:43.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.778 --rc genhtml_branch_coverage=1 00:18:43.778 --rc genhtml_function_coverage=1 00:18:43.778 --rc genhtml_legend=1 00:18:43.778 --rc geninfo_all_blocks=1 00:18:43.778 --rc geninfo_unexecuted_blocks=1 00:18:43.778 00:18:43.778 ' 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:43.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.778 --rc genhtml_branch_coverage=1 00:18:43.778 --rc genhtml_function_coverage=1 00:18:43.778 --rc genhtml_legend=1 00:18:43.778 --rc geninfo_all_blocks=1 00:18:43.778 --rc geninfo_unexecuted_blocks=1 00:18:43.778 00:18:43.778 ' 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:43.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.778 --rc genhtml_branch_coverage=1 00:18:43.778 --rc genhtml_function_coverage=1 00:18:43.778 --rc genhtml_legend=1 00:18:43.778 --rc geninfo_all_blocks=1 00:18:43.778 --rc geninfo_unexecuted_blocks=1 00:18:43.778 00:18:43.778 ' 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.778 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:43.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.779 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.040 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:44.040 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:44.040 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:44.040 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:52.186 
15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:52.186 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:52.186 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:52.186 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:52.187 Found net devices under 0000:31:00.0: cvl_0_0 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:52.187 Found net devices under 0000:31:00.1: cvl_0_1 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:52.187 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:52.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:18:52.187 00:18:52.187 --- 10.0.0.2 ping statistics --- 00:18:52.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.187 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:18:52.187 00:18:52.187 --- 10.0.0.1 ping statistics --- 00:18:52.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.187 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3782217 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3782217 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 3782217 ']' 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:52.187 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.187 [2024-11-06 15:31:09.431903] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:18:52.187 [2024-11-06 15:31:09.431973] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:52.187 [2024-11-06 15:31:09.539402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.187 [2024-11-06 15:31:09.598378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.187 [2024-11-06 15:31:09.598422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.187 [2024-11-06 15:31:09.598430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.187 [2024-11-06 15:31:09.598439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.187 [2024-11-06 15:31:09.598446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:52.187 [2024-11-06 15:31:09.600035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:52.187 [2024-11-06 15:31:09.600196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:52.187 [2024-11-06 15:31:09.600353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.187 [2024-11-06 15:31:09.600353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.449 [2024-11-06 15:31:10.313108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.449 Malloc0 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.449 [2024-11-06 15:31:10.367126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.449 { 00:18:52.449 "params": { 00:18:52.449 "name": "Nvme$subsystem", 00:18:52.449 "trtype": "$TEST_TRANSPORT", 00:18:52.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.449 "adrfam": "ipv4", 00:18:52.449 "trsvcid": "$NVMF_PORT", 00:18:52.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.449 "hdgst": ${hdgst:-false}, 00:18:52.449 "ddgst": ${ddgst:-false} 00:18:52.449 }, 00:18:52.449 "method": "bdev_nvme_attach_controller" 00:18:52.449 } 00:18:52.449 EOF 00:18:52.449 )") 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:52.449 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:52.449 "params": { 00:18:52.449 "name": "Nvme1", 00:18:52.449 "trtype": "tcp", 00:18:52.449 "traddr": "10.0.0.2", 00:18:52.449 "adrfam": "ipv4", 00:18:52.449 "trsvcid": "4420", 00:18:52.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.449 "hdgst": false, 00:18:52.449 "ddgst": false 00:18:52.449 }, 00:18:52.449 "method": "bdev_nvme_attach_controller" 00:18:52.449 }' 00:18:52.449 [2024-11-06 15:31:10.424263] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:18:52.450 [2024-11-06 15:31:10.424335] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3782338 ] 00:18:52.711 [2024-11-06 15:31:10.524829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:52.711 [2024-11-06 15:31:10.585804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.711 [2024-11-06 15:31:10.585901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.711 [2024-11-06 15:31:10.585903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.972 I/O targets: 00:18:52.972 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:52.972 00:18:52.972 00:18:52.972 CUnit - A unit testing framework for C - Version 2.1-3 00:18:52.972 http://cunit.sourceforge.net/ 00:18:52.972 00:18:52.972 00:18:52.972 Suite: bdevio tests on: Nvme1n1 00:18:52.972 Test: blockdev write read block ...passed 00:18:52.972 Test: blockdev write zeroes read block ...passed 00:18:52.972 Test: blockdev write zeroes read no split ...passed 00:18:52.972 Test: blockdev write zeroes read split ...passed 00:18:52.972 Test: blockdev write zeroes read split partial ...passed 00:18:52.972 Test: blockdev reset ...[2024-11-06 15:31:10.911658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:52.972 [2024-11-06 15:31:10.911767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5a400 (9): Bad file descriptor 00:18:53.233 [2024-11-06 15:31:11.057075] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:53.233 passed 00:18:53.233 Test: blockdev write read 8 blocks ...passed 00:18:53.233 Test: blockdev write read size > 128k ...passed 00:18:53.233 Test: blockdev write read invalid size ...passed 00:18:53.233 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:53.233 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:53.233 Test: blockdev write read max offset ...passed 00:18:53.233 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:53.233 Test: blockdev writev readv 8 blocks ...passed 00:18:53.233 Test: blockdev writev readv 30 x 1block ...passed 00:18:53.494 Test: blockdev writev readv block ...passed 00:18:53.494 Test: blockdev writev readv size > 128k ...passed 00:18:53.494 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:53.494 Test: blockdev comparev and writev ...[2024-11-06 15:31:11.240986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.494 [2024-11-06 15:31:11.241035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:53.494 [2024-11-06 15:31:11.241052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.494 [2024-11-06 15:31:11.241068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:53.494 [2024-11-06 15:31:11.241647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.494 [2024-11-06 15:31:11.241661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:53.494 [2024-11-06 15:31:11.241675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.494 [2024-11-06 15:31:11.241683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:53.494 [2024-11-06 15:31:11.242265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.494 [2024-11-06 15:31:11.242279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:53.494 [2024-11-06 15:31:11.242294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.494 [2024-11-06 15:31:11.242303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:53.494 [2024-11-06 15:31:11.242843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.494 [2024-11-06 15:31:11.242858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:53.494 [2024-11-06 15:31:11.242872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.494 [2024-11-06 15:31:11.242880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:53.494 passed 00:18:53.494 Test: blockdev nvme passthru rw ...passed 00:18:53.494 Test: blockdev nvme passthru vendor specific ...[2024-11-06 15:31:11.327589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.494 [2024-11-06 15:31:11.327608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:53.494 [2024-11-06 15:31:11.327974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.494 [2024-11-06 15:31:11.327990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:53.494 [2024-11-06 15:31:11.328339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.494 [2024-11-06 15:31:11.328352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:53.494 [2024-11-06 15:31:11.328756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.494 [2024-11-06 15:31:11.328769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:53.494 passed 00:18:53.494 Test: blockdev nvme admin passthru ...passed 00:18:53.494 Test: blockdev copy ...passed 00:18:53.494 00:18:53.494 Run Summary: Type Total Ran Passed Failed Inactive 00:18:53.494 suites 1 1 n/a 0 0 00:18:53.494 tests 23 23 23 0 0 00:18:53.494 asserts 152 152 152 0 n/a 00:18:53.494 00:18:53.494 Elapsed time = 1.242 seconds 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:53.755 rmmod nvme_tcp 00:18:53.755 rmmod nvme_fabrics 00:18:53.755 rmmod nvme_keyring 00:18:53.755 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3782217 ']' 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3782217 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 3782217 ']' 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 3782217 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3782217 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3782217' 00:18:54.016 killing process with pid 3782217 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 3782217 00:18:54.016 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 3782217 00:18:54.276 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:54.277 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:54.277 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:54.277 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:54.277 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:54.277 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:54.277 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:54.277 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:54.277 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:54.277 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.277 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.277 15:31:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:56.190 00:18:56.190 real 0m12.596s 00:18:56.190 user 0m13.879s 00:18:56.190 sys 0m6.835s 00:18:56.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:56.190 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.190 ************************************ 00:18:56.190 END TEST nvmf_bdevio_no_huge 00:18:56.190 ************************************ 00:18:56.190 15:31:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:56.190 15:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:56.190 15:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:56.190 15:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:56.546 ************************************ 00:18:56.546 START TEST nvmf_tls 00:18:56.546 ************************************ 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:56.546 * Looking for test storage... 00:18:56.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:56.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.546 --rc genhtml_branch_coverage=1 00:18:56.546 --rc genhtml_function_coverage=1 00:18:56.546 --rc genhtml_legend=1 00:18:56.546 --rc geninfo_all_blocks=1 00:18:56.546 --rc geninfo_unexecuted_blocks=1 00:18:56.546 00:18:56.546 ' 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:56.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.546 --rc genhtml_branch_coverage=1 00:18:56.546 --rc genhtml_function_coverage=1 00:18:56.546 --rc genhtml_legend=1 00:18:56.546 --rc geninfo_all_blocks=1 00:18:56.546 --rc geninfo_unexecuted_blocks=1 00:18:56.546 00:18:56.546 ' 00:18:56.546 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:56.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.546 --rc genhtml_branch_coverage=1 00:18:56.546 --rc genhtml_function_coverage=1 00:18:56.547 --rc genhtml_legend=1 00:18:56.547 --rc geninfo_all_blocks=1 00:18:56.547 --rc geninfo_unexecuted_blocks=1 00:18:56.547 00:18:56.547 ' 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:56.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.547 --rc genhtml_branch_coverage=1 00:18:56.547 --rc genhtml_function_coverage=1 00:18:56.547 --rc genhtml_legend=1 00:18:56.547 --rc geninfo_all_blocks=1 00:18:56.547 --rc geninfo_unexecuted_blocks=1 00:18:56.547 00:18:56.547 ' 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:56.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:56.547 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:04.741 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:04.741 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:04.741 Found net devices under 0000:31:00.0: cvl_0_0 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:04.741 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:04.742 Found net devices under 0000:31:00.1: cvl_0_1 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:04.742 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:04.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:04.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:19:04.742 00:19:04.742 --- 10.0.0.2 ping statistics --- 00:19:04.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.742 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:04.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:04.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:19:04.742 00:19:04.742 --- 10.0.0.1 ping statistics --- 00:19:04.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.742 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3786951 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3786951 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3786951 ']' 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:04.742 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.742 [2024-11-06 15:31:22.216094] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
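At this point nvmftestinit has finished the physical-NIC wiring: the first E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1, the firewall was opened for port 4420, and reachability was ping-verified in both directions. Condensed from the commands logged above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> initiator

nvmf_tgt (from build/bin, pid 3786951 below) is then launched inside that namespace with -m 0x2 --wait-for-rpc, so the ssl socket implementation and TLS version can be configured over RPC before framework initialization.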
00:19:04.742 [2024-11-06 15:31:22.216162] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.742 [2024-11-06 15:31:22.318510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.742 [2024-11-06 15:31:22.368765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.742 [2024-11-06 15:31:22.368820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.742 [2024-11-06 15:31:22.368829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.742 [2024-11-06 15:31:22.368836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.742 [2024-11-06 15:31:22.368843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.742 [2024-11-06 15:31:22.369627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.314 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:05.314 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:05.314 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:05.314 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:05.314 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.314 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.314 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:05.314 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:05.314 true 00:19:05.314 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:05.314 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:05.575 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:05.575 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:05.575 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:05.836 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:05.836 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:06.097 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:06.097 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:06.097 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:06.097 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:06.097 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:06.358 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:06.358 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:06.358 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:06.358 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:06.619 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:06.619 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:06.619 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:06.619 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:06.619 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:06.880 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:06.880 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:06.880 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:07.140 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:07.140 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.rrviJjvckA 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.JZlLRYCxVz 00:19:07.401 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:07.402 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:07.402 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.rrviJjvckA 00:19:07.402 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.JZlLRYCxVz 00:19:07.402 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:07.662 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:07.923 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.rrviJjvckA 00:19:07.923 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rrviJjvckA 00:19:07.923 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:07.923 [2024-11-06 15:31:25.856781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.923 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:08.183 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:08.444 [2024-11-06 15:31:26.189589] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:08.444 [2024-11-06 15:31:26.189802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.444 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:08.444 malloc0 00:19:08.444 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:08.705 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rrviJjvckA 00:19:08.965 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:08.965 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.rrviJjvckA 00:19:21.198 Initializing NVMe Controllers 00:19:21.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:21.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:21.198 Initialization complete. Launching workers. 00:19:21.198 ======================================================== 00:19:21.198 Latency(us) 00:19:21.198 Device Information : IOPS MiB/s Average min max 00:19:21.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18761.56 73.29 3411.40 978.27 4157.44 00:19:21.198 ======================================================== 00:19:21.198 Total : 18761.56 73.29 3411.40 978.27 4157.44 00:19:21.198 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rrviJjvckA 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rrviJjvckA 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3789792 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3789792 /var/tmp/bdevperf.sock 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3789792 ']' 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:21.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.198 [2024-11-06 15:31:37.061922] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:19:21.198 [2024-11-06 15:31:37.061978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789792 ] 00:19:21.198 [2024-11-06 15:31:37.151888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.198 [2024-11-06 15:31:37.187327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:21.198 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rrviJjvckA 00:19:21.198 15:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:21.198 [2024-11-06 15:31:38.164073] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.198 TLSTESTn1 00:19:21.198 15:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:21.198 Running I/O for 10 seconds... 
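While the 10-second verify run ticks along below, note where key0 came from: format_interchange_psk above pipes an inline python snippet that wraps the configured key in the TLS PSK interchange format, matching the NVMeTLSkey-1:01:...: strings logged earlier. A sketch of the assumed construction (the configured string is kept literal, its CRC32 is appended little-endian, and the result is base64-wrapped under the prefix, with hash identifier 01):

key=00112233445566778899aabbccddeeff   # configured PSK, used as a literal string
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC32 appended to the key bytes
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF

The result is written to a mktemp file, chmod 0600, and registered under the name key0 on both sides: keyring_file_add_key plus nvmf_subsystem_add_host --psk key0 on the target, keyring_file_add_key plus bdev_nvme_attach_controller --psk key0 on the bdevperf initiator.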
00:19:22.408 4932.00 IOPS, 19.27 MiB/s [2024-11-06T14:31:41.776Z] 4525.00 IOPS, 17.68 MiB/s [2024-11-06T14:31:42.718Z] 4562.33 IOPS, 17.82 MiB/s [2024-11-06T14:31:43.660Z] 4994.75 IOPS, 19.51 MiB/s [2024-11-06T14:31:44.604Z] 5049.00 IOPS, 19.72 MiB/s [2024-11-06T14:31:45.547Z] 5044.00 IOPS, 19.70 MiB/s [2024-11-06T14:31:46.490Z] 5211.57 IOPS, 20.36 MiB/s [2024-11-06T14:31:47.432Z] 5317.75 IOPS, 20.77 MiB/s [2024-11-06T14:31:48.376Z] 5351.44 IOPS, 20.90 MiB/s [2024-11-06T14:31:48.637Z] 5407.20 IOPS, 21.12 MiB/s 00:19:30.654 Latency(us) 00:19:30.654 [2024-11-06T14:31:48.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.654 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:30.654 Verification LBA range: start 0x0 length 0x2000 00:19:30.654 TLSTESTn1 : 10.02 5410.29 21.13 0.00 0.00 23621.39 5652.48 76895.57 00:19:30.654 [2024-11-06T14:31:48.637Z] =================================================================================================================== 00:19:30.654 [2024-11-06T14:31:48.637Z] Total : 5410.29 21.13 0.00 0.00 23621.39 5652.48 76895.57 00:19:30.654 { 00:19:30.654 "results": [ 00:19:30.654 { 00:19:30.654 "job": "TLSTESTn1", 00:19:30.654 "core_mask": "0x4", 00:19:30.654 "workload": "verify", 00:19:30.654 "status": "finished", 00:19:30.654 "verify_range": { 00:19:30.654 "start": 0, 00:19:30.654 "length": 8192 00:19:30.654 }, 00:19:30.654 "queue_depth": 128, 00:19:30.654 "io_size": 4096, 00:19:30.654 "runtime": 10.017754, 00:19:30.654 "iops": 5410.294563032791, 00:19:30.654 "mibps": 21.13396313684684, 00:19:30.654 "io_failed": 0, 00:19:30.654 "io_timeout": 0, 00:19:30.654 "avg_latency_us": 23621.391450026753, 00:19:30.654 "min_latency_us": 5652.48, 00:19:30.654 "max_latency_us": 76895.57333333333 00:19:30.654 } 00:19:30.654 ], 00:19:30.654 "core_count": 1 00:19:30.654 } 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3789792 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3789792 ']' 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3789792 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3789792 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3789792' 00:19:30.654 killing process with pid 3789792 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3789792 00:19:30.654 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.654 00:19:30.654 Latency(us) 00:19:30.654 [2024-11-06T14:31:48.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.654 [2024-11-06T14:31:48.637Z] 
=================================================================================================================== 00:19:30.654 [2024-11-06T14:31:48.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3789792 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JZlLRYCxVz 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JZlLRYCxVz 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JZlLRYCxVz 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JZlLRYCxVz 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3792031 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3792031 /var/tmp/bdevperf.sock 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3792031 ']' 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:30.654 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.916 [2024-11-06 15:31:48.639163] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:19:30.916 [2024-11-06 15:31:48.639222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3792031 ] 00:19:30.916 [2024-11-06 15:31:48.724543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.916 [2024-11-06 15:31:48.753119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.488 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:31.488 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:31.488 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JZlLRYCxVz 00:19:31.748 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:31.748 [2024-11-06 15:31:49.712720] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.748 [2024-11-06 15:31:49.717143] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:31.748 [2024-11-06 15:31:49.717759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c64bc0 (107): Transport endpoint is not connected 00:19:31.748 [2024-11-06 15:31:49.718750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c64bc0 (9): Bad file descriptor 00:19:31.748 [2024-11-06 15:31:49.719752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:31.748 [2024-11-06 15:31:49.719758] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:31.748 [2024-11-06 15:31:49.719764] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:31.748 [2024-11-06 15:31:49.719772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
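This attach was supposed to fail: bdevperf's key0 now resolves to /tmp/tmp.JZlLRYCxVz, which is not the PSK the target registered for host1, so the handshake is torn down and the controller never leaves the error state; the JSON-RPC dump below is the expected -5/Input-output error. The NOT wrapper from autotest_common.sh turns that failure into a test pass; roughly, assuming the usual inversion helper:

NOT() {
    # succeed exactly when the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}
# as used by target/tls.sh@147 above:
# NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JZlLRYCxVz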
00:19:31.748 request: 00:19:31.748 { 00:19:31.748 "name": "TLSTEST", 00:19:31.749 "trtype": "tcp", 00:19:31.749 "traddr": "10.0.0.2", 00:19:31.749 "adrfam": "ipv4", 00:19:31.749 "trsvcid": "4420", 00:19:31.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.749 "prchk_reftag": false, 00:19:31.749 "prchk_guard": false, 00:19:31.749 "hdgst": false, 00:19:31.749 "ddgst": false, 00:19:31.749 "psk": "key0", 00:19:31.749 "allow_unrecognized_csi": false, 00:19:31.749 "method": "bdev_nvme_attach_controller", 00:19:31.749 "req_id": 1 00:19:31.749 } 00:19:31.749 Got JSON-RPC error response 00:19:31.749 response: 00:19:31.749 { 00:19:31.749 "code": -5, 00:19:31.749 "message": "Input/output error" 00:19:31.749 } 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3792031 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3792031 ']' 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3792031 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3792031 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3792031' 00:19:32.009 killing process with pid 3792031 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3792031 00:19:32.009 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.009 00:19:32.009 Latency(us) 00:19:32.009 [2024-11-06T14:31:49.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.009 [2024-11-06T14:31:49.992Z] =================================================================================================================== 00:19:32.009 [2024-11-06T14:31:49.992Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3792031 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rrviJjvckA 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.rrviJjvckA 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rrviJjvckA 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rrviJjvckA 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3792376 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3792376 /var/tmp/bdevperf.sock 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3792376 ']' 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:32.009 15:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.009 [2024-11-06 15:31:49.953640] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:19:32.009 [2024-11-06 15:31:49.953696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3792376 ] 00:19:32.269 [2024-11-06 15:31:50.039112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.269 [2024-11-06 15:31:50.069809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.840 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.840 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:32.841 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rrviJjvckA 00:19:33.101 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:33.101 [2024-11-06 15:31:51.038546] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.101 [2024-11-06 15:31:51.047768] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:33.101 [2024-11-06 15:31:51.047791] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:33.101 [2024-11-06 15:31:51.047811] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:33.101 [2024-11-06 15:31:51.048688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef2bc0 (107): Transport endpoint is not connected 00:19:33.101 [2024-11-06 15:31:51.049684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef2bc0 (9): Bad file descriptor 00:19:33.101 [2024-11-06 15:31:51.050685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:33.101 [2024-11-06 15:31:51.050697] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:33.101 [2024-11-06 15:31:51.050703] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:33.101 [2024-11-06 15:31:51.050711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
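This is a different negative case from the previous one: the key material (key0 -> /tmp/tmp.rrviJjvckA) is the valid PSK, but it is presented as host2, for which the target never ran nvmf_subsystem_add_host, so the server-side PSK lookup fails before the handshake can complete. The identity string in the error above is assembled from a fixed protocol tag plus the two NQNs:

# PSK identity the target searches for, as printed in the error above
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1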
00:19:33.101 request: 00:19:33.101 { 00:19:33.101 "name": "TLSTEST", 00:19:33.101 "trtype": "tcp", 00:19:33.101 "traddr": "10.0.0.2", 00:19:33.101 "adrfam": "ipv4", 00:19:33.101 "trsvcid": "4420", 00:19:33.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.101 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:33.101 "prchk_reftag": false, 00:19:33.101 "prchk_guard": false, 00:19:33.101 "hdgst": false, 00:19:33.101 "ddgst": false, 00:19:33.101 "psk": "key0", 00:19:33.101 "allow_unrecognized_csi": false, 00:19:33.101 "method": "bdev_nvme_attach_controller", 00:19:33.101 "req_id": 1 00:19:33.101 } 00:19:33.101 Got JSON-RPC error response 00:19:33.101 response: 00:19:33.101 { 00:19:33.101 "code": -5, 00:19:33.101 "message": "Input/output error" 00:19:33.101 } 00:19:33.101 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3792376 00:19:33.101 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3792376 ']' 00:19:33.101 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3792376 00:19:33.101 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:33.101 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:33.101 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3792376 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3792376' 00:19:33.362 killing process with pid 3792376 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3792376 00:19:33.362 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.362 00:19:33.362 Latency(us) 00:19:33.362 [2024-11-06T14:31:51.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.362 [2024-11-06T14:31:51.345Z] =================================================================================================================== 00:19:33.362 [2024-11-06T14:31:51.345Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3792376 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rrviJjvckA 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.rrviJjvckA 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rrviJjvckA 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rrviJjvckA 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3792653 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3792653 /var/tmp/bdevperf.sock 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3792653 ']' 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:33.362 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.362 [2024-11-06 15:31:51.280444] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:19:33.362 [2024-11-06 15:31:51.280498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3792653 ] 00:19:33.622 [2024-11-06 15:31:51.364524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.622 [2024-11-06 15:31:51.392537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.194 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:34.194 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:34.194 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rrviJjvckA 00:19:34.455 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.455 [2024-11-06 15:31:52.408295] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.455 [2024-11-06 15:31:52.412796] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:34.455 [2024-11-06 15:31:52.412816] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:34.455 [2024-11-06 15:31:52.412835] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:34.455 [2024-11-06 15:31:52.413483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xababc0 (107): Transport endpoint is not connected 00:19:34.455 [2024-11-06 15:31:52.414478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xababc0 (9): Bad file descriptor 00:19:34.455 [2024-11-06 15:31:52.415479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:34.455 [2024-11-06 15:31:52.415489] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:34.455 [2024-11-06 15:31:52.415494] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:34.455 [2024-11-06 15:31:52.415502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:19:34.455 request: 00:19:34.455 { 00:19:34.455 "name": "TLSTEST", 00:19:34.455 "trtype": "tcp", 00:19:34.455 "traddr": "10.0.0.2", 00:19:34.455 "adrfam": "ipv4", 00:19:34.455 "trsvcid": "4420", 00:19:34.455 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:34.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.455 "prchk_reftag": false, 00:19:34.455 "prchk_guard": false, 00:19:34.455 "hdgst": false, 00:19:34.455 "ddgst": false, 00:19:34.455 "psk": "key0", 00:19:34.455 "allow_unrecognized_csi": false, 00:19:34.455 "method": "bdev_nvme_attach_controller", 00:19:34.455 "req_id": 1 00:19:34.455 } 00:19:34.455 Got JSON-RPC error response 00:19:34.455 response: 00:19:34.455 { 00:19:34.455 "code": -5, 00:19:34.455 "message": "Input/output error" 00:19:34.455 } 00:19:34.716 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3792653 00:19:34.716 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3792653 ']' 00:19:34.716 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3792653 00:19:34.716 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:34.716 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:34.716 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3792653 00:19:34.716 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:34.716 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3792653' 00:19:34.717 killing process with pid 3792653 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3792653 00:19:34.717 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.717 00:19:34.717 Latency(us) 00:19:34.717 [2024-11-06T14:31:52.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.717 [2024-11-06T14:31:52.700Z] =================================================================================================================== 00:19:34.717 [2024-11-06T14:31:52.700Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3792653 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:34.717 
15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3792832 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3792832 /var/tmp/bdevperf.sock 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3792832 ']' 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:34.717 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.717 [2024-11-06 15:31:52.661881] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:19:34.717 [2024-11-06 15:31:52.661938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3792832 ] 00:19:34.978 [2024-11-06 15:31:52.745677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.978 [2024-11-06 15:31:52.774211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.550 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:35.550 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:35.550 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:35.811 [2024-11-06 15:31:53.609389] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:35.811 [2024-11-06 15:31:53.609415] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:35.811 request: 00:19:35.811 { 00:19:35.811 "name": "key0", 00:19:35.811 "path": "", 00:19:35.811 "method": "keyring_file_add_key", 00:19:35.811 "req_id": 1 00:19:35.811 } 00:19:35.811 Got JSON-RPC error response 00:19:35.811 response: 00:19:35.811 { 00:19:35.811 "code": -1, 00:19:35.811 "message": "Operation not permitted" 00:19:35.811 } 00:19:35.811 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:36.072 [2024-11-06 15:31:53.793934] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.072 [2024-11-06 15:31:53.793954] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:36.072 request: 00:19:36.072 { 00:19:36.072 "name": "TLSTEST", 00:19:36.072 "trtype": "tcp", 00:19:36.072 "traddr": "10.0.0.2", 00:19:36.072 "adrfam": "ipv4", 00:19:36.072 "trsvcid": "4420", 00:19:36.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.072 "prchk_reftag": false, 00:19:36.072 "prchk_guard": false, 00:19:36.072 "hdgst": false, 00:19:36.072 "ddgst": false, 00:19:36.072 "psk": "key0", 00:19:36.072 "allow_unrecognized_csi": false, 00:19:36.072 "method": "bdev_nvme_attach_controller", 00:19:36.072 "req_id": 1 00:19:36.072 } 00:19:36.072 Got JSON-RPC error response 00:19:36.072 response: 00:19:36.072 { 00:19:36.072 "code": -126, 00:19:36.072 "message": "Required key not available" 00:19:36.072 } 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3792832 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3792832 ']' 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3792832 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3792832 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3792832' 00:19:36.072 killing process with pid 3792832 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3792832 00:19:36.072 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.072 00:19:36.072 Latency(us) 00:19:36.072 [2024-11-06T14:31:54.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.072 [2024-11-06T14:31:54.055Z] =================================================================================================================== 00:19:36.072 [2024-11-06T14:31:54.055Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3792832 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:36.072 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:36.073 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:36.073 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3786951 00:19:36.073 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3786951 ']' 00:19:36.073 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3786951 00:19:36.073 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:36.073 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:36.073 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3786951 00:19:36.073 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:36.073 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:36.073 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3786951' 00:19:36.073 killing process with pid 3786951 00:19:36.073 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3786951 00:19:36.073 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3786951 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:36.333 15:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.TqfbSgrgHa 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.TqfbSgrgHa 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.333 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3793126 00:19:36.334 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3793126 00:19:36.334 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:36.334 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3793126 ']' 00:19:36.334 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.334 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:36.334 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.334 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:36.334 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.334 [2024-11-06 15:31:54.290121] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:19:36.334 [2024-11-06 15:31:54.290208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.595 [2024-11-06 15:31:54.386054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.595 [2024-11-06 15:31:54.424058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.595 [2024-11-06 15:31:54.424096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
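[editor's note] The format_interchange_psk step above pipes the raw key through an inline python snippet (elided in the trace) to produce key_long. The sketch below reconstructs that transformation from the logged input and output: the interchange string is "NVMeTLSkey-1:<hash>:" plus base64 of the key bytes with a CRC32 appended, plus a trailing colon. Two assumptions are made here: the CRC32 is appended little-endian, and hash id 2 maps to the ":02:" (48-byte / SHA-384) variant seen in the log.

    #!/usr/bin/env python3
    import base64
    import struct
    import zlib

    def format_interchange_psk(key: str, hash_id: int) -> str:
        raw = key.encode("ascii")
        # Assumption: 4-byte CRC32 of the key, little-endian, appended before encoding.
        crc = struct.pack("<I", zlib.crc32(raw) & 0xFFFFFFFF)
        return "NVMeTLSkey-1:%02d:%s:" % (hash_id, base64.b64encode(raw + crc).decode())

    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))
    # Should reproduce the key_long value captured in the log:
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: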
00:19:36.595 [2024-11-06 15:31:54.424102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.595 [2024-11-06 15:31:54.424107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.595 [2024-11-06 15:31:54.424112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.595 [2024-11-06 15:31:54.424682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.166 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:37.166 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:37.166 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.166 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.166 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.166 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.166 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.TqfbSgrgHa 00:19:37.166 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TqfbSgrgHa 00:19:37.166 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:37.426 [2024-11-06 15:31:55.277353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.427 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:37.687 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:37.687 [2024-11-06 15:31:55.638247] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:37.687 [2024-11-06 15:31:55.638438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.947 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:37.947 malloc0 00:19:37.947 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:38.208 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TqfbSgrgHa 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TqfbSgrgHa 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TqfbSgrgHa 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3793681 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3793681 /var/tmp/bdevperf.sock 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3793681 ']' 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:38.469 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.469 [2024-11-06 15:31:56.429399] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:19:38.469 [2024-11-06 15:31:56.429452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3793681 ] 00:19:38.730 [2024-11-06 15:31:56.514659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.730 [2024-11-06 15:31:56.543771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.301 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:39.301 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:39.301 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TqfbSgrgHa 00:19:39.562 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:39.562 [2024-11-06 15:31:57.535500] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.822 TLSTESTn1 00:19:39.822 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:39.822 Running I/O for 10 seconds... 00:19:42.149 6062.00 IOPS, 23.68 MiB/s [2024-11-06T14:32:01.074Z] 5886.50 IOPS, 22.99 MiB/s [2024-11-06T14:32:02.014Z] 5909.67 IOPS, 23.08 MiB/s [2024-11-06T14:32:02.955Z] 5926.75 IOPS, 23.15 MiB/s [2024-11-06T14:32:03.897Z] 5842.00 IOPS, 22.82 MiB/s [2024-11-06T14:32:04.840Z] 5849.33 IOPS, 22.85 MiB/s [2024-11-06T14:32:05.782Z] 5860.00 IOPS, 22.89 MiB/s [2024-11-06T14:32:07.168Z] 5887.00 IOPS, 23.00 MiB/s [2024-11-06T14:32:08.110Z] 5848.44 IOPS, 22.85 MiB/s [2024-11-06T14:32:08.110Z] 5830.00 IOPS, 22.77 MiB/s 00:19:50.127 Latency(us) 00:19:50.127 [2024-11-06T14:32:08.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.127 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.127 Verification LBA range: start 0x0 length 0x2000 00:19:50.127 TLSTESTn1 : 10.03 5828.09 22.77 0.00 0.00 21922.67 6198.61 23374.51 00:19:50.127 [2024-11-06T14:32:08.110Z] =================================================================================================================== 00:19:50.127 [2024-11-06T14:32:08.110Z] Total : 5828.09 22.77 0.00 0.00 21922.67 6198.61 23374.51 00:19:50.127 { 00:19:50.127 "results": [ 00:19:50.127 { 00:19:50.127 "job": "TLSTESTn1", 00:19:50.127 "core_mask": "0x4", 00:19:50.127 "workload": "verify", 00:19:50.127 "status": "finished", 00:19:50.127 "verify_range": { 00:19:50.127 "start": 0, 00:19:50.127 "length": 8192 00:19:50.127 }, 00:19:50.127 "queue_depth": 128, 00:19:50.127 "io_size": 4096, 00:19:50.127 "runtime": 10.025236, 00:19:50.127 "iops": 5828.092226457312, 00:19:50.127 "mibps": 22.765985259598875, 00:19:50.127 "io_failed": 0, 00:19:50.127 "io_timeout": 0, 00:19:50.127 "avg_latency_us": 21922.673246160513, 00:19:50.127 "min_latency_us": 6198.613333333334, 00:19:50.127 "max_latency_us": 23374.506666666668 00:19:50.127 } 00:19:50.127 ], 00:19:50.127 
"core_count": 1 00:19:50.127 } 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3793681 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3793681 ']' 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3793681 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3793681 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3793681' 00:19:50.127 killing process with pid 3793681 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3793681 00:19:50.127 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.127 00:19:50.127 Latency(us) 00:19:50.127 [2024-11-06T14:32:08.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.127 [2024-11-06T14:32:08.110Z] =================================================================================================================== 00:19:50.127 [2024-11-06T14:32:08.110Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3793681 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.TqfbSgrgHa 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TqfbSgrgHa 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TqfbSgrgHa 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.127 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TqfbSgrgHa 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TqfbSgrgHa 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3795804 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3795804 /var/tmp/bdevperf.sock 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3795804 ']' 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:50.128 15:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.128 [2024-11-06 15:32:08.033511] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:19:50.128 [2024-11-06 15:32:08.033570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795804 ] 00:19:50.388 [2024-11-06 15:32:08.115947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.388 [2024-11-06 15:32:08.144982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.959 15:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:50.959 15:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:50.959 15:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TqfbSgrgHa 00:19:51.219 [2024-11-06 15:32:08.960099] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.TqfbSgrgHa': 0100666 00:19:51.219 [2024-11-06 15:32:08.960121] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:51.219 request: 00:19:51.219 { 00:19:51.219 "name": "key0", 00:19:51.219 "path": "/tmp/tmp.TqfbSgrgHa", 00:19:51.219 "method": "keyring_file_add_key", 00:19:51.219 "req_id": 1 00:19:51.219 } 00:19:51.219 Got JSON-RPC error response 00:19:51.219 response: 00:19:51.219 { 00:19:51.219 "code": -1, 00:19:51.219 "message": "Operation not permitted" 00:19:51.219 } 00:19:51.219 15:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.219 [2024-11-06 15:32:09.128598] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.219 [2024-11-06 15:32:09.128618] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:51.219 request: 00:19:51.219 { 00:19:51.219 "name": "TLSTEST", 00:19:51.219 "trtype": "tcp", 00:19:51.219 "traddr": "10.0.0.2", 00:19:51.219 "adrfam": "ipv4", 00:19:51.219 "trsvcid": "4420", 00:19:51.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.219 "prchk_reftag": false, 00:19:51.219 "prchk_guard": false, 00:19:51.219 "hdgst": false, 00:19:51.219 "ddgst": false, 00:19:51.219 "psk": "key0", 00:19:51.219 "allow_unrecognized_csi": false, 00:19:51.219 "method": "bdev_nvme_attach_controller", 00:19:51.219 "req_id": 1 00:19:51.219 } 00:19:51.219 Got JSON-RPC error response 00:19:51.219 response: 00:19:51.219 { 00:19:51.219 "code": -126, 00:19:51.219 "message": "Required key not available" 00:19:51.219 } 00:19:51.219 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3795804 00:19:51.219 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3795804 ']' 00:19:51.219 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3795804 00:19:51.219 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:51.219 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.219 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3795804 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3795804' 00:19:51.480 killing process with pid 3795804 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3795804 00:19:51.480 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.480 00:19:51.480 Latency(us) 00:19:51.480 [2024-11-06T14:32:09.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.480 [2024-11-06T14:32:09.463Z] =================================================================================================================== 00:19:51.480 [2024-11-06T14:32:09.463Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3795804 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3793126 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3793126 ']' 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3793126 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3793126 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3793126' 00:19:51.480 killing process with pid 3793126 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3793126 00:19:51.480 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3793126 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3796155 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3796155 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3796155 ']' 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:51.741 15:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.741 [2024-11-06 15:32:09.556616] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:19:51.741 [2024-11-06 15:32:09.556672] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.741 [2024-11-06 15:32:09.646046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.741 [2024-11-06 15:32:09.675287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.741 [2024-11-06 15:32:09.675315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.741 [2024-11-06 15:32:09.675321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.741 [2024-11-06 15:32:09.675325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.741 [2024-11-06 15:32:09.675329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
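[editor's note] The keyring_file_add_key failure just above (and the earlier "Non-absolute paths are not allowed" error) show the two validations the keyring applies to a PSK file: the path must be absolute, and the file must not be accessible to group/other — the test only proceeds after chmod 0600 restores the key. This is an illustrative client-side pre-check mirroring those two errors, not SPDK's actual keyring code.

    #!/usr/bin/env python3
    import os
    import stat

    def check_psk_file(path: str) -> None:
        # Mirrors keyring_file_check_path's logged rejections.
        if not os.path.isabs(path):
            raise ValueError("Non-absolute paths are not allowed: %r" % path)
        mode = os.stat(path).st_mode
        if mode & (stat.S_IRWXG | stat.S_IRWXO):
            # A 0666 file (as created by 'chmod 0666' in this test) fails here;
            # restoring 0600 makes it pass.
            raise PermissionError(
                "Invalid permissions for key file %r: %o" % (path, mode))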
00:19:51.741 [2024-11-06 15:32:09.675791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.682 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.TqfbSgrgHa 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.TqfbSgrgHa 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.TqfbSgrgHa 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TqfbSgrgHa 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:52.683 [2024-11-06 15:32:10.540444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.683 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:52.943 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:52.943 [2024-11-06 15:32:10.901322] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.943 [2024-11-06 15:32:10.901510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.203 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:53.203 malloc0 00:19:53.203 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:53.463 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TqfbSgrgHa 00:19:53.463 [2024-11-06 
15:32:11.440296] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.TqfbSgrgHa': 0100666 00:19:53.463 [2024-11-06 15:32:11.440314] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:53.724 request: 00:19:53.724 { 00:19:53.724 "name": "key0", 00:19:53.724 "path": "/tmp/tmp.TqfbSgrgHa", 00:19:53.724 "method": "keyring_file_add_key", 00:19:53.724 "req_id": 1 00:19:53.724 } 00:19:53.724 Got JSON-RPC error response 00:19:53.724 response: 00:19:53.724 { 00:19:53.724 "code": -1, 00:19:53.724 "message": "Operation not permitted" 00:19:53.724 } 00:19:53.724 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:53.724 [2024-11-06 15:32:11.616756] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:53.724 [2024-11-06 15:32:11.616782] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:53.724 request: 00:19:53.724 { 00:19:53.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.724 "host": "nqn.2016-06.io.spdk:host1", 00:19:53.724 "psk": "key0", 00:19:53.724 "method": "nvmf_subsystem_add_host", 00:19:53.724 "req_id": 1 00:19:53.724 } 00:19:53.724 Got JSON-RPC error response 00:19:53.724 response: 00:19:53.724 { 00:19:53.724 "code": -32603, 00:19:53.724 "message": "Internal error" 00:19:53.724 } 00:19:53.724 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:53.724 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:53.724 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:53.724 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:53.724 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3796155 00:19:53.724 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3796155 ']' 00:19:53.724 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3796155 00:19:53.724 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:53.724 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:53.724 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3796155 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3796155' 00:19:53.986 killing process with pid 3796155 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3796155 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3796155 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.TqfbSgrgHa 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:53.986 15:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3796627 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3796627 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3796627 ']' 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.986 15:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.986 [2024-11-06 15:32:11.887883] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:19:53.986 [2024-11-06 15:32:11.887937] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.247 [2024-11-06 15:32:11.980265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.247 [2024-11-06 15:32:12.008867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.247 [2024-11-06 15:32:12.008897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.247 [2024-11-06 15:32:12.008903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.247 [2024-11-06 15:32:12.008908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.247 [2024-11-06 15:32:12.008912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
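The keyring_file_add_key / nvmf_subsystem_add_host failures above are the intended negative path of this test (note the NOT wrapper around setup_nvmf_tgt at target/tls.sh@178 and the es=1 check that follows): SPDK's file-based keyring refuses a PSK file that is group- or world-readable (here mode 0100666), so key0 is never registered and add_host then fails with -32603. The chmod 0600 at target/tls.sh@182 is what unblocks the next pass. A minimal sketch of the corrected sequence, with $KEY standing in for the generated /tmp/tmp.* file and rpc.py abbreviating the full scripts/rpc.py path seen in the traces:

  chmod 0600 "$KEY"                                # owner-only mode passes keyring_file_check_path
  rpc.py keyring_file_add_key key0 "$KEY"          # key0 now registers cleanly
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0         # host admitted with the TLS PSK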
00:19:54.247 [2024-11-06 15:32:12.009383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.818 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:54.818 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:54.818 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:54.818 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:54.818 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.818 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.818 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.TqfbSgrgHa 00:19:54.818 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TqfbSgrgHa 00:19:54.818 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:55.078 [2024-11-06 15:32:12.878051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.079 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:55.339 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:55.339 [2024-11-06 15:32:13.238940] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.339 [2024-11-06 15:32:13.239130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.339 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:55.600 malloc0 00:19:55.600 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:55.861 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TqfbSgrgHa 00:19:55.861 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:56.122 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3797182 00:19:56.122 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.122 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.122 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3797182 /var/tmp/bdevperf.sock 00:19:56.122 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3797182 ']' 00:19:56.122 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.122 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:56.122 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.122 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:56.122 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.122 [2024-11-06 15:32:14.019793] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:19:56.122 [2024-11-06 15:32:14.019850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3797182 ] 00:19:56.383 [2024-11-06 15:32:14.110303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.383 [2024-11-06 15:32:14.145216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.953 15:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:56.953 15:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:56.953 15:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TqfbSgrgHa 00:19:57.214 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:57.215 [2024-11-06 15:32:15.150283] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.475 TLSTESTn1 00:19:57.475 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:57.736 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:57.736 "subsystems": [ 00:19:57.736 { 00:19:57.736 "subsystem": "keyring", 00:19:57.736 "config": [ 00:19:57.736 { 00:19:57.736 "method": "keyring_file_add_key", 00:19:57.736 "params": { 00:19:57.736 "name": "key0", 00:19:57.736 "path": "/tmp/tmp.TqfbSgrgHa" 00:19:57.736 } 00:19:57.736 } 00:19:57.736 ] 00:19:57.736 }, 00:19:57.736 { 00:19:57.736 "subsystem": "iobuf", 00:19:57.736 "config": [ 00:19:57.736 { 00:19:57.736 "method": "iobuf_set_options", 00:19:57.736 "params": { 00:19:57.736 "small_pool_count": 8192, 00:19:57.736 "large_pool_count": 1024, 00:19:57.736 "small_bufsize": 8192, 00:19:57.736 "large_bufsize": 135168, 00:19:57.736 "enable_numa": false 00:19:57.736 } 00:19:57.737 } 00:19:57.737 ] 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "subsystem": "sock", 00:19:57.737 "config": [ 00:19:57.737 { 00:19:57.737 "method": "sock_set_default_impl", 00:19:57.737 "params": { 00:19:57.737 "impl_name": "posix" 
00:19:57.737 } 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "method": "sock_impl_set_options", 00:19:57.737 "params": { 00:19:57.737 "impl_name": "ssl", 00:19:57.737 "recv_buf_size": 4096, 00:19:57.737 "send_buf_size": 4096, 00:19:57.737 "enable_recv_pipe": true, 00:19:57.737 "enable_quickack": false, 00:19:57.737 "enable_placement_id": 0, 00:19:57.737 "enable_zerocopy_send_server": true, 00:19:57.737 "enable_zerocopy_send_client": false, 00:19:57.737 "zerocopy_threshold": 0, 00:19:57.737 "tls_version": 0, 00:19:57.737 "enable_ktls": false 00:19:57.737 } 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "method": "sock_impl_set_options", 00:19:57.737 "params": { 00:19:57.737 "impl_name": "posix", 00:19:57.737 "recv_buf_size": 2097152, 00:19:57.737 "send_buf_size": 2097152, 00:19:57.737 "enable_recv_pipe": true, 00:19:57.737 "enable_quickack": false, 00:19:57.737 "enable_placement_id": 0, 00:19:57.737 "enable_zerocopy_send_server": true, 00:19:57.737 "enable_zerocopy_send_client": false, 00:19:57.737 "zerocopy_threshold": 0, 00:19:57.737 "tls_version": 0, 00:19:57.737 "enable_ktls": false 00:19:57.737 } 00:19:57.737 } 00:19:57.737 ] 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "subsystem": "vmd", 00:19:57.737 "config": [] 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "subsystem": "accel", 00:19:57.737 "config": [ 00:19:57.737 { 00:19:57.737 "method": "accel_set_options", 00:19:57.737 "params": { 00:19:57.737 "small_cache_size": 128, 00:19:57.737 "large_cache_size": 16, 00:19:57.737 "task_count": 2048, 00:19:57.737 "sequence_count": 2048, 00:19:57.737 "buf_count": 2048 00:19:57.737 } 00:19:57.737 } 00:19:57.737 ] 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "subsystem": "bdev", 00:19:57.737 "config": [ 00:19:57.737 { 00:19:57.737 "method": "bdev_set_options", 00:19:57.737 "params": { 00:19:57.737 "bdev_io_pool_size": 65535, 00:19:57.737 "bdev_io_cache_size": 256, 00:19:57.737 "bdev_auto_examine": true, 00:19:57.737 "iobuf_small_cache_size": 128, 00:19:57.737 "iobuf_large_cache_size": 16 00:19:57.737 } 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "method": "bdev_raid_set_options", 00:19:57.737 "params": { 00:19:57.737 "process_window_size_kb": 1024, 00:19:57.737 "process_max_bandwidth_mb_sec": 0 00:19:57.737 } 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "method": "bdev_iscsi_set_options", 00:19:57.737 "params": { 00:19:57.737 "timeout_sec": 30 00:19:57.737 } 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "method": "bdev_nvme_set_options", 00:19:57.737 "params": { 00:19:57.737 "action_on_timeout": "none", 00:19:57.737 "timeout_us": 0, 00:19:57.737 "timeout_admin_us": 0, 00:19:57.737 "keep_alive_timeout_ms": 10000, 00:19:57.737 "arbitration_burst": 0, 00:19:57.737 "low_priority_weight": 0, 00:19:57.737 "medium_priority_weight": 0, 00:19:57.737 "high_priority_weight": 0, 00:19:57.737 "nvme_adminq_poll_period_us": 10000, 00:19:57.737 "nvme_ioq_poll_period_us": 0, 00:19:57.737 "io_queue_requests": 0, 00:19:57.737 "delay_cmd_submit": true, 00:19:57.737 "transport_retry_count": 4, 00:19:57.737 "bdev_retry_count": 3, 00:19:57.737 "transport_ack_timeout": 0, 00:19:57.737 "ctrlr_loss_timeout_sec": 0, 00:19:57.737 "reconnect_delay_sec": 0, 00:19:57.737 "fast_io_fail_timeout_sec": 0, 00:19:57.737 "disable_auto_failback": false, 00:19:57.737 "generate_uuids": false, 00:19:57.737 "transport_tos": 0, 00:19:57.737 "nvme_error_stat": false, 00:19:57.737 "rdma_srq_size": 0, 00:19:57.737 "io_path_stat": false, 00:19:57.737 "allow_accel_sequence": false, 00:19:57.737 "rdma_max_cq_size": 0, 00:19:57.737 
"rdma_cm_event_timeout_ms": 0, 00:19:57.737 "dhchap_digests": [ 00:19:57.737 "sha256", 00:19:57.737 "sha384", 00:19:57.737 "sha512" 00:19:57.737 ], 00:19:57.737 "dhchap_dhgroups": [ 00:19:57.737 "null", 00:19:57.737 "ffdhe2048", 00:19:57.737 "ffdhe3072", 00:19:57.737 "ffdhe4096", 00:19:57.737 "ffdhe6144", 00:19:57.737 "ffdhe8192" 00:19:57.737 ] 00:19:57.737 } 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "method": "bdev_nvme_set_hotplug", 00:19:57.737 "params": { 00:19:57.737 "period_us": 100000, 00:19:57.737 "enable": false 00:19:57.737 } 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "method": "bdev_malloc_create", 00:19:57.737 "params": { 00:19:57.737 "name": "malloc0", 00:19:57.737 "num_blocks": 8192, 00:19:57.737 "block_size": 4096, 00:19:57.737 "physical_block_size": 4096, 00:19:57.737 "uuid": "4e4aecef-238f-470b-a0fc-5bb5d09ff7dc", 00:19:57.737 "optimal_io_boundary": 0, 00:19:57.737 "md_size": 0, 00:19:57.737 "dif_type": 0, 00:19:57.737 "dif_is_head_of_md": false, 00:19:57.737 "dif_pi_format": 0 00:19:57.737 } 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "method": "bdev_wait_for_examine" 00:19:57.737 } 00:19:57.737 ] 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "subsystem": "nbd", 00:19:57.737 "config": [] 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "subsystem": "scheduler", 00:19:57.737 "config": [ 00:19:57.737 { 00:19:57.737 "method": "framework_set_scheduler", 00:19:57.737 "params": { 00:19:57.737 "name": "static" 00:19:57.737 } 00:19:57.737 } 00:19:57.737 ] 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "subsystem": "nvmf", 00:19:57.737 "config": [ 00:19:57.737 { 00:19:57.737 "method": "nvmf_set_config", 00:19:57.737 "params": { 00:19:57.737 "discovery_filter": "match_any", 00:19:57.737 "admin_cmd_passthru": { 00:19:57.737 "identify_ctrlr": false 00:19:57.737 }, 00:19:57.737 "dhchap_digests": [ 00:19:57.737 "sha256", 00:19:57.737 "sha384", 00:19:57.737 "sha512" 00:19:57.737 ], 00:19:57.737 "dhchap_dhgroups": [ 00:19:57.737 "null", 00:19:57.737 "ffdhe2048", 00:19:57.737 "ffdhe3072", 00:19:57.737 "ffdhe4096", 00:19:57.737 "ffdhe6144", 00:19:57.737 "ffdhe8192" 00:19:57.737 ] 00:19:57.737 } 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "method": "nvmf_set_max_subsystems", 00:19:57.737 "params": { 00:19:57.737 "max_subsystems": 1024 00:19:57.737 } 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "method": "nvmf_set_crdt", 00:19:57.737 "params": { 00:19:57.737 "crdt1": 0, 00:19:57.737 "crdt2": 0, 00:19:57.737 "crdt3": 0 00:19:57.737 } 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "method": "nvmf_create_transport", 00:19:57.737 "params": { 00:19:57.737 "trtype": "TCP", 00:19:57.737 "max_queue_depth": 128, 00:19:57.737 "max_io_qpairs_per_ctrlr": 127, 00:19:57.737 "in_capsule_data_size": 4096, 00:19:57.737 "max_io_size": 131072, 00:19:57.737 "io_unit_size": 131072, 00:19:57.737 "max_aq_depth": 128, 00:19:57.737 "num_shared_buffers": 511, 00:19:57.737 "buf_cache_size": 4294967295, 00:19:57.737 "dif_insert_or_strip": false, 00:19:57.737 "zcopy": false, 00:19:57.737 "c2h_success": false, 00:19:57.737 "sock_priority": 0, 00:19:57.737 "abort_timeout_sec": 1, 00:19:57.737 "ack_timeout": 0, 00:19:57.737 "data_wr_pool_size": 0 00:19:57.737 } 00:19:57.737 }, 00:19:57.737 { 00:19:57.737 "method": "nvmf_create_subsystem", 00:19:57.737 "params": { 00:19:57.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.737 "allow_any_host": false, 00:19:57.737 "serial_number": "SPDK00000000000001", 00:19:57.737 "model_number": "SPDK bdev Controller", 00:19:57.737 "max_namespaces": 10, 00:19:57.737 "min_cntlid": 1, 00:19:57.737 
"max_cntlid": 65519, 00:19:57.738 "ana_reporting": false 00:19:57.738 } 00:19:57.738 }, 00:19:57.738 { 00:19:57.738 "method": "nvmf_subsystem_add_host", 00:19:57.738 "params": { 00:19:57.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.738 "host": "nqn.2016-06.io.spdk:host1", 00:19:57.738 "psk": "key0" 00:19:57.738 } 00:19:57.738 }, 00:19:57.738 { 00:19:57.738 "method": "nvmf_subsystem_add_ns", 00:19:57.738 "params": { 00:19:57.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.738 "namespace": { 00:19:57.738 "nsid": 1, 00:19:57.738 "bdev_name": "malloc0", 00:19:57.738 "nguid": "4E4AECEF238F470BA0FC5BB5D09FF7DC", 00:19:57.738 "uuid": "4e4aecef-238f-470b-a0fc-5bb5d09ff7dc", 00:19:57.738 "no_auto_visible": false 00:19:57.738 } 00:19:57.738 } 00:19:57.738 }, 00:19:57.738 { 00:19:57.738 "method": "nvmf_subsystem_add_listener", 00:19:57.738 "params": { 00:19:57.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.738 "listen_address": { 00:19:57.738 "trtype": "TCP", 00:19:57.738 "adrfam": "IPv4", 00:19:57.738 "traddr": "10.0.0.2", 00:19:57.738 "trsvcid": "4420" 00:19:57.738 }, 00:19:57.738 "secure_channel": true 00:19:57.738 } 00:19:57.738 } 00:19:57.738 ] 00:19:57.738 } 00:19:57.738 ] 00:19:57.738 }' 00:19:57.738 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:58.000 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:58.000 "subsystems": [ 00:19:58.000 { 00:19:58.000 "subsystem": "keyring", 00:19:58.000 "config": [ 00:19:58.000 { 00:19:58.000 "method": "keyring_file_add_key", 00:19:58.000 "params": { 00:19:58.000 "name": "key0", 00:19:58.000 "path": "/tmp/tmp.TqfbSgrgHa" 00:19:58.000 } 00:19:58.000 } 00:19:58.000 ] 00:19:58.000 }, 00:19:58.000 { 00:19:58.000 "subsystem": "iobuf", 00:19:58.000 "config": [ 00:19:58.000 { 00:19:58.000 "method": "iobuf_set_options", 00:19:58.000 "params": { 00:19:58.000 "small_pool_count": 8192, 00:19:58.000 "large_pool_count": 1024, 00:19:58.000 "small_bufsize": 8192, 00:19:58.000 "large_bufsize": 135168, 00:19:58.000 "enable_numa": false 00:19:58.000 } 00:19:58.000 } 00:19:58.000 ] 00:19:58.000 }, 00:19:58.000 { 00:19:58.000 "subsystem": "sock", 00:19:58.000 "config": [ 00:19:58.000 { 00:19:58.000 "method": "sock_set_default_impl", 00:19:58.000 "params": { 00:19:58.000 "impl_name": "posix" 00:19:58.000 } 00:19:58.000 }, 00:19:58.000 { 00:19:58.000 "method": "sock_impl_set_options", 00:19:58.000 "params": { 00:19:58.000 "impl_name": "ssl", 00:19:58.000 "recv_buf_size": 4096, 00:19:58.000 "send_buf_size": 4096, 00:19:58.000 "enable_recv_pipe": true, 00:19:58.000 "enable_quickack": false, 00:19:58.000 "enable_placement_id": 0, 00:19:58.000 "enable_zerocopy_send_server": true, 00:19:58.000 "enable_zerocopy_send_client": false, 00:19:58.000 "zerocopy_threshold": 0, 00:19:58.000 "tls_version": 0, 00:19:58.000 "enable_ktls": false 00:19:58.000 } 00:19:58.000 }, 00:19:58.000 { 00:19:58.000 "method": "sock_impl_set_options", 00:19:58.000 "params": { 00:19:58.000 "impl_name": "posix", 00:19:58.000 "recv_buf_size": 2097152, 00:19:58.000 "send_buf_size": 2097152, 00:19:58.000 "enable_recv_pipe": true, 00:19:58.000 "enable_quickack": false, 00:19:58.000 "enable_placement_id": 0, 00:19:58.000 "enable_zerocopy_send_server": true, 00:19:58.000 "enable_zerocopy_send_client": false, 00:19:58.000 "zerocopy_threshold": 0, 00:19:58.000 "tls_version": 0, 00:19:58.000 "enable_ktls": false 00:19:58.000 } 00:19:58.000 
} 00:19:58.000 ] 00:19:58.000 }, 00:19:58.000 { 00:19:58.000 "subsystem": "vmd", 00:19:58.000 "config": [] 00:19:58.000 }, 00:19:58.000 { 00:19:58.000 "subsystem": "accel", 00:19:58.000 "config": [ 00:19:58.000 { 00:19:58.000 "method": "accel_set_options", 00:19:58.000 "params": { 00:19:58.000 "small_cache_size": 128, 00:19:58.000 "large_cache_size": 16, 00:19:58.000 "task_count": 2048, 00:19:58.000 "sequence_count": 2048, 00:19:58.000 "buf_count": 2048 00:19:58.000 } 00:19:58.000 } 00:19:58.000 ] 00:19:58.000 }, 00:19:58.000 { 00:19:58.000 "subsystem": "bdev", 00:19:58.000 "config": [ 00:19:58.000 { 00:19:58.000 "method": "bdev_set_options", 00:19:58.000 "params": { 00:19:58.000 "bdev_io_pool_size": 65535, 00:19:58.000 "bdev_io_cache_size": 256, 00:19:58.000 "bdev_auto_examine": true, 00:19:58.000 "iobuf_small_cache_size": 128, 00:19:58.000 "iobuf_large_cache_size": 16 00:19:58.000 } 00:19:58.000 }, 00:19:58.000 { 00:19:58.000 "method": "bdev_raid_set_options", 00:19:58.000 "params": { 00:19:58.000 "process_window_size_kb": 1024, 00:19:58.000 "process_max_bandwidth_mb_sec": 0 00:19:58.000 } 00:19:58.000 }, 00:19:58.000 { 00:19:58.000 "method": "bdev_iscsi_set_options", 00:19:58.000 "params": { 00:19:58.000 "timeout_sec": 30 00:19:58.000 } 00:19:58.000 }, 00:19:58.000 { 00:19:58.000 "method": "bdev_nvme_set_options", 00:19:58.000 "params": { 00:19:58.000 "action_on_timeout": "none", 00:19:58.000 "timeout_us": 0, 00:19:58.000 "timeout_admin_us": 0, 00:19:58.000 "keep_alive_timeout_ms": 10000, 00:19:58.000 "arbitration_burst": 0, 00:19:58.000 "low_priority_weight": 0, 00:19:58.000 "medium_priority_weight": 0, 00:19:58.000 "high_priority_weight": 0, 00:19:58.000 "nvme_adminq_poll_period_us": 10000, 00:19:58.000 "nvme_ioq_poll_period_us": 0, 00:19:58.000 "io_queue_requests": 512, 00:19:58.000 "delay_cmd_submit": true, 00:19:58.000 "transport_retry_count": 4, 00:19:58.000 "bdev_retry_count": 3, 00:19:58.000 "transport_ack_timeout": 0, 00:19:58.000 "ctrlr_loss_timeout_sec": 0, 00:19:58.000 "reconnect_delay_sec": 0, 00:19:58.000 "fast_io_fail_timeout_sec": 0, 00:19:58.000 "disable_auto_failback": false, 00:19:58.000 "generate_uuids": false, 00:19:58.000 "transport_tos": 0, 00:19:58.000 "nvme_error_stat": false, 00:19:58.000 "rdma_srq_size": 0, 00:19:58.000 "io_path_stat": false, 00:19:58.000 "allow_accel_sequence": false, 00:19:58.000 "rdma_max_cq_size": 0, 00:19:58.000 "rdma_cm_event_timeout_ms": 0, 00:19:58.000 "dhchap_digests": [ 00:19:58.000 "sha256", 00:19:58.000 "sha384", 00:19:58.000 "sha512" 00:19:58.000 ], 00:19:58.000 "dhchap_dhgroups": [ 00:19:58.000 "null", 00:19:58.000 "ffdhe2048", 00:19:58.000 "ffdhe3072", 00:19:58.000 "ffdhe4096", 00:19:58.000 "ffdhe6144", 00:19:58.000 "ffdhe8192" 00:19:58.000 ] 00:19:58.000 } 00:19:58.000 }, 00:19:58.000 { 00:19:58.000 "method": "bdev_nvme_attach_controller", 00:19:58.000 "params": { 00:19:58.000 "name": "TLSTEST", 00:19:58.000 "trtype": "TCP", 00:19:58.000 "adrfam": "IPv4", 00:19:58.000 "traddr": "10.0.0.2", 00:19:58.000 "trsvcid": "4420", 00:19:58.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.000 "prchk_reftag": false, 00:19:58.000 "prchk_guard": false, 00:19:58.000 "ctrlr_loss_timeout_sec": 0, 00:19:58.000 "reconnect_delay_sec": 0, 00:19:58.000 "fast_io_fail_timeout_sec": 0, 00:19:58.000 "psk": "key0", 00:19:58.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.000 "hdgst": false, 00:19:58.000 "ddgst": false, 00:19:58.000 "multipath": "multipath" 00:19:58.001 } 00:19:58.001 }, 00:19:58.001 { 00:19:58.001 "method": 
"bdev_nvme_set_hotplug", 00:19:58.001 "params": { 00:19:58.001 "period_us": 100000, 00:19:58.001 "enable": false 00:19:58.001 } 00:19:58.001 }, 00:19:58.001 { 00:19:58.001 "method": "bdev_wait_for_examine" 00:19:58.001 } 00:19:58.001 ] 00:19:58.001 }, 00:19:58.001 { 00:19:58.001 "subsystem": "nbd", 00:19:58.001 "config": [] 00:19:58.001 } 00:19:58.001 ] 00:19:58.001 }' 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3797182 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3797182 ']' 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3797182 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3797182 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3797182' 00:19:58.001 killing process with pid 3797182 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3797182 00:19:58.001 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.001 00:19:58.001 Latency(us) 00:19:58.001 [2024-11-06T14:32:15.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.001 [2024-11-06T14:32:15.984Z] =================================================================================================================== 00:19:58.001 [2024-11-06T14:32:15.984Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3797182 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3796627 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3796627 ']' 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3796627 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:58.001 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3796627 00:19:58.262 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:58.262 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:58.262 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3796627' 00:19:58.262 killing process with pid 3796627 00:19:58.262 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3796627 00:19:58.262 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3796627 00:19:58.262 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:58.262 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:58.262 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.262 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.262 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:58.262 "subsystems": [ 00:19:58.262 { 00:19:58.262 "subsystem": "keyring", 00:19:58.262 "config": [ 00:19:58.262 { 00:19:58.262 "method": "keyring_file_add_key", 00:19:58.262 "params": { 00:19:58.262 "name": "key0", 00:19:58.262 "path": "/tmp/tmp.TqfbSgrgHa" 00:19:58.262 } 00:19:58.262 } 00:19:58.262 ] 00:19:58.262 }, 00:19:58.262 { 00:19:58.262 "subsystem": "iobuf", 00:19:58.262 "config": [ 00:19:58.262 { 00:19:58.262 "method": "iobuf_set_options", 00:19:58.262 "params": { 00:19:58.262 "small_pool_count": 8192, 00:19:58.262 "large_pool_count": 1024, 00:19:58.262 "small_bufsize": 8192, 00:19:58.262 "large_bufsize": 135168, 00:19:58.262 "enable_numa": false 00:19:58.262 } 00:19:58.262 } 00:19:58.262 ] 00:19:58.262 }, 00:19:58.262 { 00:19:58.262 "subsystem": "sock", 00:19:58.262 "config": [ 00:19:58.262 { 00:19:58.262 "method": "sock_set_default_impl", 00:19:58.262 "params": { 00:19:58.262 "impl_name": "posix" 00:19:58.262 } 00:19:58.262 }, 00:19:58.262 { 00:19:58.262 "method": "sock_impl_set_options", 00:19:58.262 "params": { 00:19:58.262 "impl_name": "ssl", 00:19:58.262 "recv_buf_size": 4096, 00:19:58.262 "send_buf_size": 4096, 00:19:58.262 "enable_recv_pipe": true, 00:19:58.262 "enable_quickack": false, 00:19:58.262 "enable_placement_id": 0, 00:19:58.262 "enable_zerocopy_send_server": true, 00:19:58.262 "enable_zerocopy_send_client": false, 00:19:58.262 "zerocopy_threshold": 0, 00:19:58.262 "tls_version": 0, 00:19:58.262 "enable_ktls": false 00:19:58.262 } 00:19:58.262 }, 00:19:58.262 { 00:19:58.262 "method": "sock_impl_set_options", 00:19:58.262 "params": { 00:19:58.262 "impl_name": "posix", 00:19:58.262 "recv_buf_size": 2097152, 00:19:58.262 "send_buf_size": 2097152, 00:19:58.262 "enable_recv_pipe": true, 00:19:58.262 "enable_quickack": false, 00:19:58.262 "enable_placement_id": 0, 00:19:58.262 "enable_zerocopy_send_server": true, 00:19:58.262 "enable_zerocopy_send_client": false, 00:19:58.262 "zerocopy_threshold": 0, 00:19:58.262 "tls_version": 0, 00:19:58.262 "enable_ktls": false 00:19:58.262 } 00:19:58.262 } 00:19:58.262 ] 00:19:58.262 }, 00:19:58.262 { 00:19:58.262 "subsystem": "vmd", 00:19:58.262 "config": [] 00:19:58.262 }, 00:19:58.262 { 00:19:58.262 "subsystem": "accel", 00:19:58.262 "config": [ 00:19:58.262 { 00:19:58.262 "method": "accel_set_options", 00:19:58.262 "params": { 00:19:58.262 "small_cache_size": 128, 00:19:58.262 "large_cache_size": 16, 00:19:58.262 "task_count": 2048, 00:19:58.262 "sequence_count": 2048, 00:19:58.262 "buf_count": 2048 00:19:58.262 } 00:19:58.262 } 00:19:58.262 ] 00:19:58.262 }, 00:19:58.262 { 00:19:58.263 "subsystem": "bdev", 00:19:58.263 "config": [ 00:19:58.263 { 00:19:58.263 "method": "bdev_set_options", 00:19:58.263 "params": { 00:19:58.263 "bdev_io_pool_size": 65535, 00:19:58.263 "bdev_io_cache_size": 256, 00:19:58.263 "bdev_auto_examine": true, 00:19:58.263 "iobuf_small_cache_size": 128, 00:19:58.263 "iobuf_large_cache_size": 16 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "method": "bdev_raid_set_options", 00:19:58.263 "params": { 00:19:58.263 
"process_window_size_kb": 1024, 00:19:58.263 "process_max_bandwidth_mb_sec": 0 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "method": "bdev_iscsi_set_options", 00:19:58.263 "params": { 00:19:58.263 "timeout_sec": 30 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "method": "bdev_nvme_set_options", 00:19:58.263 "params": { 00:19:58.263 "action_on_timeout": "none", 00:19:58.263 "timeout_us": 0, 00:19:58.263 "timeout_admin_us": 0, 00:19:58.263 "keep_alive_timeout_ms": 10000, 00:19:58.263 "arbitration_burst": 0, 00:19:58.263 "low_priority_weight": 0, 00:19:58.263 "medium_priority_weight": 0, 00:19:58.263 "high_priority_weight": 0, 00:19:58.263 "nvme_adminq_poll_period_us": 10000, 00:19:58.263 "nvme_ioq_poll_period_us": 0, 00:19:58.263 "io_queue_requests": 0, 00:19:58.263 "delay_cmd_submit": true, 00:19:58.263 "transport_retry_count": 4, 00:19:58.263 "bdev_retry_count": 3, 00:19:58.263 "transport_ack_timeout": 0, 00:19:58.263 "ctrlr_loss_timeout_sec": 0, 00:19:58.263 "reconnect_delay_sec": 0, 00:19:58.263 "fast_io_fail_timeout_sec": 0, 00:19:58.263 "disable_auto_failback": false, 00:19:58.263 "generate_uuids": false, 00:19:58.263 "transport_tos": 0, 00:19:58.263 "nvme_error_stat": false, 00:19:58.263 "rdma_srq_size": 0, 00:19:58.263 "io_path_stat": false, 00:19:58.263 "allow_accel_sequence": false, 00:19:58.263 "rdma_max_cq_size": 0, 00:19:58.263 "rdma_cm_event_timeout_ms": 0, 00:19:58.263 "dhchap_digests": [ 00:19:58.263 "sha256", 00:19:58.263 "sha384", 00:19:58.263 "sha512" 00:19:58.263 ], 00:19:58.263 "dhchap_dhgroups": [ 00:19:58.263 "null", 00:19:58.263 "ffdhe2048", 00:19:58.263 "ffdhe3072", 00:19:58.263 "ffdhe4096", 00:19:58.263 "ffdhe6144", 00:19:58.263 "ffdhe8192" 00:19:58.263 ] 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "method": "bdev_nvme_set_hotplug", 00:19:58.263 "params": { 00:19:58.263 "period_us": 100000, 00:19:58.263 "enable": false 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "method": "bdev_malloc_create", 00:19:58.263 "params": { 00:19:58.263 "name": "malloc0", 00:19:58.263 "num_blocks": 8192, 00:19:58.263 "block_size": 4096, 00:19:58.263 "physical_block_size": 4096, 00:19:58.263 "uuid": "4e4aecef-238f-470b-a0fc-5bb5d09ff7dc", 00:19:58.263 "optimal_io_boundary": 0, 00:19:58.263 "md_size": 0, 00:19:58.263 "dif_type": 0, 00:19:58.263 "dif_is_head_of_md": false, 00:19:58.263 "dif_pi_format": 0 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "method": "bdev_wait_for_examine" 00:19:58.263 } 00:19:58.263 ] 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "subsystem": "nbd", 00:19:58.263 "config": [] 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "subsystem": "scheduler", 00:19:58.263 "config": [ 00:19:58.263 { 00:19:58.263 "method": "framework_set_scheduler", 00:19:58.263 "params": { 00:19:58.263 "name": "static" 00:19:58.263 } 00:19:58.263 } 00:19:58.263 ] 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "subsystem": "nvmf", 00:19:58.263 "config": [ 00:19:58.263 { 00:19:58.263 "method": "nvmf_set_config", 00:19:58.263 "params": { 00:19:58.263 "discovery_filter": "match_any", 00:19:58.263 "admin_cmd_passthru": { 00:19:58.263 "identify_ctrlr": false 00:19:58.263 }, 00:19:58.263 "dhchap_digests": [ 00:19:58.263 "sha256", 00:19:58.263 "sha384", 00:19:58.263 "sha512" 00:19:58.263 ], 00:19:58.263 "dhchap_dhgroups": [ 00:19:58.263 "null", 00:19:58.263 "ffdhe2048", 00:19:58.263 "ffdhe3072", 00:19:58.263 "ffdhe4096", 00:19:58.263 "ffdhe6144", 00:19:58.263 "ffdhe8192" 00:19:58.263 ] 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 
00:19:58.263 "method": "nvmf_set_max_subsystems", 00:19:58.263 "params": { 00:19:58.263 "max_subsystems": 1024 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "method": "nvmf_set_crdt", 00:19:58.263 "params": { 00:19:58.263 "crdt1": 0, 00:19:58.263 "crdt2": 0, 00:19:58.263 "crdt3": 0 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "method": "nvmf_create_transport", 00:19:58.263 "params": { 00:19:58.263 "trtype": "TCP", 00:19:58.263 "max_queue_depth": 128, 00:19:58.263 "max_io_qpairs_per_ctrlr": 127, 00:19:58.263 "in_capsule_data_size": 4096, 00:19:58.263 "max_io_size": 131072, 00:19:58.263 "io_unit_size": 131072, 00:19:58.263 "max_aq_depth": 128, 00:19:58.263 "num_shared_buffers": 511, 00:19:58.263 "buf_cache_size": 4294967295, 00:19:58.263 "dif_insert_or_strip": false, 00:19:58.263 "zcopy": false, 00:19:58.263 "c2h_success": false, 00:19:58.263 "sock_priority": 0, 00:19:58.263 "abort_timeout_sec": 1, 00:19:58.263 "ack_timeout": 0, 00:19:58.263 "data_wr_pool_size": 0 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "method": "nvmf_create_subsystem", 00:19:58.263 "params": { 00:19:58.263 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.263 "allow_any_host": false, 00:19:58.263 "serial_number": "SPDK00000000000001", 00:19:58.263 "model_number": "SPDK bdev Controller", 00:19:58.263 "max_namespaces": 10, 00:19:58.263 "min_cntlid": 1, 00:19:58.263 "max_cntlid": 65519, 00:19:58.263 "ana_reporting": false 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "method": "nvmf_subsystem_add_host", 00:19:58.263 "params": { 00:19:58.263 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.263 "host": "nqn.2016-06.io.spdk:host1", 00:19:58.263 "psk": "key0" 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "method": "nvmf_subsystem_add_ns", 00:19:58.263 "params": { 00:19:58.263 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.263 "namespace": { 00:19:58.263 "nsid": 1, 00:19:58.263 "bdev_name": "malloc0", 00:19:58.263 "nguid": "4E4AECEF238F470BA0FC5BB5D09FF7DC", 00:19:58.263 "uuid": "4e4aecef-238f-470b-a0fc-5bb5d09ff7dc", 00:19:58.263 "no_auto_visible": false 00:19:58.263 } 00:19:58.263 } 00:19:58.263 }, 00:19:58.263 { 00:19:58.263 "method": "nvmf_subsystem_add_listener", 00:19:58.263 "params": { 00:19:58.263 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.263 "listen_address": { 00:19:58.263 "trtype": "TCP", 00:19:58.263 "adrfam": "IPv4", 00:19:58.263 "traddr": "10.0.0.2", 00:19:58.263 "trsvcid": "4420" 00:19:58.263 }, 00:19:58.263 "secure_channel": true 00:19:58.263 } 00:19:58.263 } 00:19:58.263 ] 00:19:58.263 } 00:19:58.263 ] 00:19:58.263 }' 00:19:58.263 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3797570 00:19:58.263 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3797570 00:19:58.263 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:58.263 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3797570 ']' 00:19:58.263 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.263 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:58.263 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:19:58.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.263 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:58.263 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.263 [2024-11-06 15:32:16.178861] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:19:58.264 [2024-11-06 15:32:16.178917] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.524 [2024-11-06 15:32:16.272007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.524 [2024-11-06 15:32:16.301251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.524 [2024-11-06 15:32:16.301281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.524 [2024-11-06 15:32:16.301286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.524 [2024-11-06 15:32:16.301291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.524 [2024-11-06 15:32:16.301296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.524 [2024-11-06 15:32:16.301791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.524 [2024-11-06 15:32:16.495846] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.784 [2024-11-06 15:32:16.527873] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.784 [2024-11-06 15:32:16.528070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.046 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:59.046 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:59.046 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.046 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:59.046 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.046 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.046 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3797678 00:19:59.046 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3797678 /var/tmp/bdevperf.sock 00:19:59.046 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3797678 ']' 00:19:59.046 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.046 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:59.046 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:59.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.046 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:59.046 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:59.046 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.046 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:59.046 "subsystems": [ 00:19:59.046 { 00:19:59.046 "subsystem": "keyring", 00:19:59.046 "config": [ 00:19:59.046 { 00:19:59.046 "method": "keyring_file_add_key", 00:19:59.046 "params": { 00:19:59.046 "name": "key0", 00:19:59.046 "path": "/tmp/tmp.TqfbSgrgHa" 00:19:59.046 } 00:19:59.046 } 00:19:59.046 ] 00:19:59.046 }, 00:19:59.046 { 00:19:59.046 "subsystem": "iobuf", 00:19:59.046 "config": [ 00:19:59.046 { 00:19:59.046 "method": "iobuf_set_options", 00:19:59.046 "params": { 00:19:59.046 "small_pool_count": 8192, 00:19:59.046 "large_pool_count": 1024, 00:19:59.046 "small_bufsize": 8192, 00:19:59.046 "large_bufsize": 135168, 00:19:59.046 "enable_numa": false 00:19:59.046 } 00:19:59.046 } 00:19:59.046 ] 00:19:59.046 }, 00:19:59.046 { 00:19:59.046 "subsystem": "sock", 00:19:59.046 "config": [ 00:19:59.046 { 00:19:59.046 "method": "sock_set_default_impl", 00:19:59.046 "params": { 00:19:59.046 "impl_name": "posix" 00:19:59.046 } 00:19:59.046 }, 00:19:59.046 { 00:19:59.046 "method": "sock_impl_set_options", 00:19:59.046 "params": { 00:19:59.046 "impl_name": "ssl", 00:19:59.046 "recv_buf_size": 4096, 00:19:59.046 "send_buf_size": 4096, 00:19:59.046 "enable_recv_pipe": true, 00:19:59.046 "enable_quickack": false, 00:19:59.046 "enable_placement_id": 0, 00:19:59.046 "enable_zerocopy_send_server": true, 00:19:59.046 "enable_zerocopy_send_client": false, 00:19:59.046 "zerocopy_threshold": 0, 00:19:59.046 "tls_version": 0, 00:19:59.046 "enable_ktls": false 00:19:59.046 } 00:19:59.046 }, 00:19:59.046 { 00:19:59.046 "method": "sock_impl_set_options", 00:19:59.046 "params": { 00:19:59.046 "impl_name": "posix", 00:19:59.046 "recv_buf_size": 2097152, 00:19:59.046 "send_buf_size": 2097152, 00:19:59.046 "enable_recv_pipe": true, 00:19:59.046 "enable_quickack": false, 00:19:59.046 "enable_placement_id": 0, 00:19:59.046 "enable_zerocopy_send_server": true, 00:19:59.046 "enable_zerocopy_send_client": false, 00:19:59.046 "zerocopy_threshold": 0, 00:19:59.046 "tls_version": 0, 00:19:59.046 "enable_ktls": false 00:19:59.046 } 00:19:59.046 } 00:19:59.046 ] 00:19:59.046 }, 00:19:59.046 { 00:19:59.046 "subsystem": "vmd", 00:19:59.046 "config": [] 00:19:59.046 }, 00:19:59.046 { 00:19:59.046 "subsystem": "accel", 00:19:59.046 "config": [ 00:19:59.046 { 00:19:59.046 "method": "accel_set_options", 00:19:59.046 "params": { 00:19:59.046 "small_cache_size": 128, 00:19:59.046 "large_cache_size": 16, 00:19:59.046 "task_count": 2048, 00:19:59.046 "sequence_count": 2048, 00:19:59.046 "buf_count": 2048 00:19:59.046 } 00:19:59.046 } 00:19:59.046 ] 00:19:59.046 }, 00:19:59.046 { 00:19:59.046 "subsystem": "bdev", 00:19:59.046 "config": [ 00:19:59.046 { 00:19:59.046 "method": "bdev_set_options", 00:19:59.046 "params": { 00:19:59.046 "bdev_io_pool_size": 65535, 00:19:59.046 "bdev_io_cache_size": 256, 00:19:59.046 "bdev_auto_examine": true, 00:19:59.046 "iobuf_small_cache_size": 128, 
00:19:59.046 "iobuf_large_cache_size": 16 00:19:59.046 } 00:19:59.046 }, 00:19:59.046 { 00:19:59.046 "method": "bdev_raid_set_options", 00:19:59.046 "params": { 00:19:59.046 "process_window_size_kb": 1024, 00:19:59.046 "process_max_bandwidth_mb_sec": 0 00:19:59.046 } 00:19:59.046 }, 00:19:59.046 { 00:19:59.046 "method": "bdev_iscsi_set_options", 00:19:59.046 "params": { 00:19:59.046 "timeout_sec": 30 00:19:59.046 } 00:19:59.046 }, 00:19:59.046 { 00:19:59.046 "method": "bdev_nvme_set_options", 00:19:59.046 "params": { 00:19:59.046 "action_on_timeout": "none", 00:19:59.046 "timeout_us": 0, 00:19:59.046 "timeout_admin_us": 0, 00:19:59.046 "keep_alive_timeout_ms": 10000, 00:19:59.046 "arbitration_burst": 0, 00:19:59.046 "low_priority_weight": 0, 00:19:59.046 "medium_priority_weight": 0, 00:19:59.046 "high_priority_weight": 0, 00:19:59.046 "nvme_adminq_poll_period_us": 10000, 00:19:59.046 "nvme_ioq_poll_period_us": 0, 00:19:59.046 "io_queue_requests": 512, 00:19:59.046 "delay_cmd_submit": true, 00:19:59.046 "transport_retry_count": 4, 00:19:59.046 "bdev_retry_count": 3, 00:19:59.046 "transport_ack_timeout": 0, 00:19:59.046 "ctrlr_loss_timeout_sec": 0, 00:19:59.046 "reconnect_delay_sec": 0, 00:19:59.046 "fast_io_fail_timeout_sec": 0, 00:19:59.046 "disable_auto_failback": false, 00:19:59.046 "generate_uuids": false, 00:19:59.046 "transport_tos": 0, 00:19:59.046 "nvme_error_stat": false, 00:19:59.046 "rdma_srq_size": 0, 00:19:59.046 "io_path_stat": false, 00:19:59.046 "allow_accel_sequence": false, 00:19:59.046 "rdma_max_cq_size": 0, 00:19:59.046 "rdma_cm_event_timeout_ms": 0, 00:19:59.046 "dhchap_digests": [ 00:19:59.046 "sha256", 00:19:59.046 "sha384", 00:19:59.046 "sha512" 00:19:59.046 ], 00:19:59.046 "dhchap_dhgroups": [ 00:19:59.046 "null", 00:19:59.046 "ffdhe2048", 00:19:59.046 "ffdhe3072", 00:19:59.046 "ffdhe4096", 00:19:59.046 "ffdhe6144", 00:19:59.046 "ffdhe8192" 00:19:59.046 ] 00:19:59.046 } 00:19:59.046 }, 00:19:59.046 { 00:19:59.046 "method": "bdev_nvme_attach_controller", 00:19:59.046 "params": { 00:19:59.046 "name": "TLSTEST", 00:19:59.046 "trtype": "TCP", 00:19:59.046 "adrfam": "IPv4", 00:19:59.046 "traddr": "10.0.0.2", 00:19:59.046 "trsvcid": "4420", 00:19:59.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.047 "prchk_reftag": false, 00:19:59.047 "prchk_guard": false, 00:19:59.047 "ctrlr_loss_timeout_sec": 0, 00:19:59.047 "reconnect_delay_sec": 0, 00:19:59.047 "fast_io_fail_timeout_sec": 0, 00:19:59.047 "psk": "key0", 00:19:59.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.047 "hdgst": false, 00:19:59.047 "ddgst": false, 00:19:59.047 "multipath": "multipath" 00:19:59.047 } 00:19:59.047 }, 00:19:59.047 { 00:19:59.047 "method": "bdev_nvme_set_hotplug", 00:19:59.047 "params": { 00:19:59.047 "period_us": 100000, 00:19:59.047 "enable": false 00:19:59.047 } 00:19:59.047 }, 00:19:59.047 { 00:19:59.047 "method": "bdev_wait_for_examine" 00:19:59.047 } 00:19:59.047 ] 00:19:59.047 }, 00:19:59.047 { 00:19:59.047 "subsystem": "nbd", 00:19:59.047 "config": [] 00:19:59.047 } 00:19:59.047 ] 00:19:59.047 }' 00:19:59.307 [2024-11-06 15:32:17.050499] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:19:59.307 [2024-11-06 15:32:17.050551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3797678 ] 00:19:59.307 [2024-11-06 15:32:17.139888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.307 [2024-11-06 15:32:17.175311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.568 [2024-11-06 15:32:17.316098] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.139 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.139 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:00.139 15:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:00.139 Running I/O for 10 seconds... 00:20:02.023 5428.00 IOPS, 21.20 MiB/s [2024-11-06T14:32:20.948Z] 4731.00 IOPS, 18.48 MiB/s [2024-11-06T14:32:22.332Z] 4774.67 IOPS, 18.65 MiB/s [2024-11-06T14:32:23.272Z] 4950.00 IOPS, 19.34 MiB/s [2024-11-06T14:32:24.214Z] 5008.20 IOPS, 19.56 MiB/s [2024-11-06T14:32:25.154Z] 5131.33 IOPS, 20.04 MiB/s [2024-11-06T14:32:26.095Z] 5078.29 IOPS, 19.84 MiB/s [2024-11-06T14:32:27.035Z] 5165.75 IOPS, 20.18 MiB/s [2024-11-06T14:32:27.975Z] 5217.44 IOPS, 20.38 MiB/s [2024-11-06T14:32:28.235Z] 5140.80 IOPS, 20.08 MiB/s 00:20:10.252 Latency(us) 00:20:10.252 [2024-11-06T14:32:28.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.252 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:10.252 Verification LBA range: start 0x0 length 0x2000 00:20:10.252 TLSTESTn1 : 10.02 5142.19 20.09 0.00 0.00 24850.16 5870.93 27525.12 00:20:10.252 [2024-11-06T14:32:28.235Z] =================================================================================================================== 00:20:10.252 [2024-11-06T14:32:28.235Z] Total : 5142.19 20.09 0.00 0.00 24850.16 5870.93 27525.12 00:20:10.252 { 00:20:10.252 "results": [ 00:20:10.252 { 00:20:10.252 "job": "TLSTESTn1", 00:20:10.252 "core_mask": "0x4", 00:20:10.252 "workload": "verify", 00:20:10.252 "status": "finished", 00:20:10.252 "verify_range": { 00:20:10.252 "start": 0, 00:20:10.252 "length": 8192 00:20:10.252 }, 00:20:10.252 "queue_depth": 128, 00:20:10.252 "io_size": 4096, 00:20:10.252 "runtime": 10.021794, 00:20:10.252 "iops": 5142.193104348383, 00:20:10.252 "mibps": 20.08669181386087, 00:20:10.252 "io_failed": 0, 00:20:10.252 "io_timeout": 0, 00:20:10.252 "avg_latency_us": 24850.164836159947, 00:20:10.252 "min_latency_us": 5870.933333333333, 00:20:10.252 "max_latency_us": 27525.12 00:20:10.252 } 00:20:10.252 ], 00:20:10.252 "core_count": 1 00:20:10.252 } 00:20:10.252 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.252 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3797678 00:20:10.252 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3797678 ']' 00:20:10.252 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3797678 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3797678 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3797678' 00:20:10.253 killing process with pid 3797678 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3797678 00:20:10.253 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.253 00:20:10.253 Latency(us) 00:20:10.253 [2024-11-06T14:32:28.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.253 [2024-11-06T14:32:28.236Z] =================================================================================================================== 00:20:10.253 [2024-11-06T14:32:28.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3797678 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3797570 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3797570 ']' 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3797570 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:10.253 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3797570 00:20:10.511 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:10.511 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:10.511 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3797570' 00:20:10.511 killing process with pid 3797570 00:20:10.511 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3797570 00:20:10.511 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3797570 00:20:10.511 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:10.511 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.512 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:10.512 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.512 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3799939 00:20:10.512 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3799939 00:20:10.512 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
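Two details of this relaunch are easy to miss: the target now runs under ip netns exec inside the cvl_0_0_ns_spdk network namespace, so the 10.0.0.2 listener is served from the namespaced test interface rather than the host stack, and waitforlisten then polls until the app answers on /var/tmp/spdk.sock. The rough shape of that helper, inferred from the max_retries / kill -0 / return traces in this log; the rpc_get_methods probe is an assumption about how liveness is actually checked:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1                         # app died during startup
          rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0  # RPC socket is up
          sleep 0.1
      done
      return 1
  }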
00:20:10.512 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3799939 ']' 00:20:10.512 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.512 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:10.512 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.512 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:10.512 15:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.512 [2024-11-06 15:32:28.407923] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:20:10.512 [2024-11-06 15:32:28.407978] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.772 [2024-11-06 15:32:28.503171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.772 [2024-11-06 15:32:28.545588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.772 [2024-11-06 15:32:28.545634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.772 [2024-11-06 15:32:28.545642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.772 [2024-11-06 15:32:28.545649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.772 [2024-11-06 15:32:28.545655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
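Each setup_nvmf_tgt pass traced in this section (target/tls.sh@50-59, about to run again below) reduces to the same seven RPCs against a fresh target. Condensed here, with rpc.py again short for the full scripts/rpc.py path and $KEY for the PSK file:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k                # -k: TLS listener (logged as experimental)
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 "$KEY"
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0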
00:20:10.772 [2024-11-06 15:32:28.546383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.342 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:11.342 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:11.342 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.342 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:11.342 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.342 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.342 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.TqfbSgrgHa 00:20:11.342 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TqfbSgrgHa 00:20:11.342 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:11.603 [2024-11-06 15:32:29.442740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.603 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:11.862 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:11.862 [2024-11-06 15:32:29.811683] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:11.862 [2024-11-06 15:32:29.812024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.862 15:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:12.162 malloc0 00:20:12.162 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:12.457 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TqfbSgrgHa 00:20:12.458 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:12.746 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3800312 00:20:12.746 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.746 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:12.746 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3800312 /var/tmp/bdevperf.sock 00:20:12.746 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3800312 ']' 00:20:12.746 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.746 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:12.746 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.746 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:12.746 15:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.746 [2024-11-06 15:32:30.653105] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:20:12.746 [2024-11-06 15:32:30.653180] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3800312 ] 00:20:13.019 [2024-11-06 15:32:30.744248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.019 [2024-11-06 15:32:30.780663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.624 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:13.624 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:13.625 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TqfbSgrgHa 00:20:13.625 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:13.885 [2024-11-06 15:32:31.743998] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.885 nvme0n1 00:20:13.885 15:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.145 Running I/O for 1 seconds... 
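Steps tls.sh@221 through tls.sh@234 above form the complete TLS round trip, all driven over rpc.py. Condensed into a runnable sketch — every call is taken verbatim from the trace, and /tmp/tmp.TqfbSgrgHa is the temporary PSK interchange file used throughout this run:

    RPC=./spdk/scripts/rpc.py

    # Target side: TCP transport, a subsystem, a TLS-enabled listener (-k),
    # and a 32 MB RAM-disk namespace with 4096-byte blocks.
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Register the PSK with the keyring and bind it to the one allowed host.
    $RPC keyring_file_add_key key0 /tmp/tmp.TqfbSgrgHa
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # Initiator side: bdevperf idles in wait mode (-z), gets the same key on its
    # own RPC socket, then attaches over TLS before the verify workload runs.
    ./spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TqfbSgrgHa
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    ./spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests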
00:20:15.086 4370.00 IOPS, 17.07 MiB/s 00:20:15.086 Latency(us) 00:20:15.086 [2024-11-06T14:32:33.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.086 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.086 Verification LBA range: start 0x0 length 0x2000 00:20:15.086 nvme0n1 : 1.02 4427.13 17.29 0.00 0.00 28707.20 5789.01 62477.65 00:20:15.086 [2024-11-06T14:32:33.069Z] =================================================================================================================== 00:20:15.086 [2024-11-06T14:32:33.069Z] Total : 4427.13 17.29 0.00 0.00 28707.20 5789.01 62477.65 00:20:15.086 { 00:20:15.086 "results": [ 00:20:15.086 { 00:20:15.086 "job": "nvme0n1", 00:20:15.086 "core_mask": "0x2", 00:20:15.086 "workload": "verify", 00:20:15.086 "status": "finished", 00:20:15.086 "verify_range": { 00:20:15.086 "start": 0, 00:20:15.086 "length": 8192 00:20:15.086 }, 00:20:15.086 "queue_depth": 128, 00:20:15.086 "io_size": 4096, 00:20:15.086 "runtime": 1.016008, 00:20:15.086 "iops": 4427.130495035472, 00:20:15.086 "mibps": 17.293478496232314, 00:20:15.086 "io_failed": 0, 00:20:15.086 "io_timeout": 0, 00:20:15.086 "avg_latency_us": 28707.201802282492, 00:20:15.086 "min_latency_us": 5789.013333333333, 00:20:15.086 "max_latency_us": 62477.653333333335 00:20:15.086 } 00:20:15.086 ], 00:20:15.086 "core_count": 1 00:20:15.086 } 00:20:15.086 15:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3800312 00:20:15.086 15:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3800312 ']' 00:20:15.086 15:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3800312 00:20:15.086 15:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:15.086 15:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:15.086 15:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3800312 00:20:15.086 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:15.086 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:15.086 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3800312' 00:20:15.086 killing process with pid 3800312 00:20:15.086 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3800312 00:20:15.086 Received shutdown signal, test time was about 1.000000 seconds 00:20:15.086 00:20:15.086 Latency(us) 00:20:15.086 [2024-11-06T14:32:33.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.086 [2024-11-06T14:32:33.069Z] =================================================================================================================== 00:20:15.086 [2024-11-06T14:32:33.069Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.086 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3800312 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3799939 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3799939 ']' 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3799939 00:20:15.347 15:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3799939 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3799939' 00:20:15.347 killing process with pid 3799939 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3799939 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3799939 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:15.347 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.607 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3800997 00:20:15.607 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3800997 00:20:15.607 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:15.607 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3800997 ']' 00:20:15.607 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.607 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:15.607 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.607 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:15.607 15:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.607 [2024-11-06 15:32:33.387973] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:20:15.607 [2024-11-06 15:32:33.388030] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.607 [2024-11-06 15:32:33.482324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.607 [2024-11-06 15:32:33.532576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.607 [2024-11-06 15:32:33.532629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
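The killprocess xtrace repeated throughout this run (autotest_common.sh@952-@976) follows one fixed shape. A paraphrase of the helper as the trace exercises it — only the branches actually taken here are shown, and the sudo special case is left as a comment rather than reconstructed:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 1   # bail if the pid is not alive
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The real helper special-cases process_name = sudo (killing the child
        # instead); that branch never fires here -- every pid is a reactor_*.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap it and surface the exit status
    }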
00:20:15.608 [2024-11-06 15:32:33.532638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.608 [2024-11-06 15:32:33.532645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.608 [2024-11-06 15:32:33.532651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.608 [2024-11-06 15:32:33.533414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.549 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:16.549 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:16.549 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:16.549 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:16.549 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.549 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.549 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:16.549 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.549 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.549 [2024-11-06 15:32:34.241448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.549 malloc0 00:20:16.549 [2024-11-06 15:32:34.271600] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.550 [2024-11-06 15:32:34.271943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.550 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.550 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3801052 00:20:16.550 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3801052 /var/tmp/bdevperf.sock 00:20:16.550 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:16.550 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3801052 ']' 00:20:16.550 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.550 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:16.550 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.550 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:16.550 15:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.550 [2024-11-06 15:32:34.365290] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
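Note the bare rpc_cmd at tls.sh@243: called with no arguments, the harness helper reads RPC commands from stdin and plays them over a single persistent rpc.py session, which is why one xtrace line is followed by the transport, malloc0 and TLS-listener notices all at once. A hypothetical reconstruction of that batch, inferred from those notices and from the saved configuration that appears later in the trace (the heredoc contents are not copied from tls.sh):

    rpc_cmd <<- CONFIG
        nvmf_create_transport -t tcp -o
        bdev_malloc_create 32 4096 -b malloc0
        nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
        keyring_file_add_key key0 /tmp/tmp.TqfbSgrgHa
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
        nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    CONFIG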
00:20:16.550 [2024-11-06 15:32:34.365355] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3801052 ] 00:20:16.550 [2024-11-06 15:32:34.454005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.550 [2024-11-06 15:32:34.488631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.490 15:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:17.490 15:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:17.490 15:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TqfbSgrgHa 00:20:17.490 15:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:17.751 [2024-11-06 15:32:35.499855] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.751 nvme0n1 00:20:17.751 15:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:17.751 Running I/O for 1 seconds... 00:20:19.135 5862.00 IOPS, 22.90 MiB/s 00:20:19.135 Latency(us) 00:20:19.135 [2024-11-06T14:32:37.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.135 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:19.135 Verification LBA range: start 0x0 length 0x2000 00:20:19.135 nvme0n1 : 1.01 5906.79 23.07 0.00 0.00 21522.26 5843.63 34078.72 00:20:19.135 [2024-11-06T14:32:37.118Z] =================================================================================================================== 00:20:19.135 [2024-11-06T14:32:37.118Z] Total : 5906.79 23.07 0.00 0.00 21522.26 5843.63 34078.72 00:20:19.135 { 00:20:19.135 "results": [ 00:20:19.135 { 00:20:19.135 "job": "nvme0n1", 00:20:19.135 "core_mask": "0x2", 00:20:19.135 "workload": "verify", 00:20:19.135 "status": "finished", 00:20:19.135 "verify_range": { 00:20:19.135 "start": 0, 00:20:19.135 "length": 8192 00:20:19.135 }, 00:20:19.135 "queue_depth": 128, 00:20:19.135 "io_size": 4096, 00:20:19.135 "runtime": 1.014257, 00:20:19.135 "iops": 5906.786938616149, 00:20:19.135 "mibps": 23.073386478969333, 00:20:19.135 "io_failed": 0, 00:20:19.135 "io_timeout": 0, 00:20:19.135 "avg_latency_us": 21522.259139820842, 00:20:19.135 "min_latency_us": 5843.626666666667, 00:20:19.135 "max_latency_us": 34078.72 00:20:19.135 } 00:20:19.135 ], 00:20:19.135 "core_count": 1 00:20:19.135 } 00:20:19.135 15:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:19.135 15:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.135 15:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.135 15:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.135 15:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:19.135 "subsystems": [ 00:20:19.135 { 00:20:19.135 "subsystem": "keyring", 00:20:19.135 "config": [ 00:20:19.135 { 00:20:19.135 "method": "keyring_file_add_key", 00:20:19.135 "params": { 00:20:19.135 "name": "key0", 00:20:19.135 "path": "/tmp/tmp.TqfbSgrgHa" 00:20:19.135 } 00:20:19.135 } 00:20:19.135 ] 00:20:19.135 }, 00:20:19.136 { 00:20:19.136 "subsystem": "iobuf", 00:20:19.136 "config": [ 00:20:19.136 { 00:20:19.136 "method": "iobuf_set_options", 00:20:19.136 "params": { 00:20:19.136 "small_pool_count": 8192, 00:20:19.136 "large_pool_count": 1024, 00:20:19.136 "small_bufsize": 8192, 00:20:19.136 "large_bufsize": 135168, 00:20:19.136 "enable_numa": false 00:20:19.136 } 00:20:19.136 } 00:20:19.136 ] 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "subsystem": "sock", 00:20:19.136 "config": [ 00:20:19.136 { 00:20:19.136 "method": "sock_set_default_impl", 00:20:19.136 "params": { 00:20:19.136 "impl_name": "posix" 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "method": "sock_impl_set_options", 00:20:19.136 "params": { 00:20:19.136 "impl_name": "ssl", 00:20:19.136 "recv_buf_size": 4096, 00:20:19.136 "send_buf_size": 4096, 00:20:19.136 "enable_recv_pipe": true, 00:20:19.136 "enable_quickack": false, 00:20:19.136 "enable_placement_id": 0, 00:20:19.136 "enable_zerocopy_send_server": true, 00:20:19.136 "enable_zerocopy_send_client": false, 00:20:19.136 "zerocopy_threshold": 0, 00:20:19.136 "tls_version": 0, 00:20:19.136 "enable_ktls": false 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "method": "sock_impl_set_options", 00:20:19.136 "params": { 00:20:19.136 "impl_name": "posix", 00:20:19.136 "recv_buf_size": 2097152, 00:20:19.136 "send_buf_size": 2097152, 00:20:19.136 "enable_recv_pipe": true, 00:20:19.136 "enable_quickack": false, 00:20:19.136 "enable_placement_id": 0, 00:20:19.136 "enable_zerocopy_send_server": true, 00:20:19.136 "enable_zerocopy_send_client": false, 00:20:19.136 "zerocopy_threshold": 0, 00:20:19.136 "tls_version": 0, 00:20:19.136 "enable_ktls": false 00:20:19.136 } 00:20:19.136 } 00:20:19.136 ] 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "subsystem": "vmd", 00:20:19.136 "config": [] 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "subsystem": "accel", 00:20:19.136 "config": [ 00:20:19.136 { 00:20:19.136 "method": "accel_set_options", 00:20:19.136 "params": { 00:20:19.136 "small_cache_size": 128, 00:20:19.136 "large_cache_size": 16, 00:20:19.136 "task_count": 2048, 00:20:19.136 "sequence_count": 2048, 00:20:19.136 "buf_count": 2048 00:20:19.136 } 00:20:19.136 } 00:20:19.136 ] 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "subsystem": "bdev", 00:20:19.136 "config": [ 00:20:19.136 { 00:20:19.136 "method": "bdev_set_options", 00:20:19.136 "params": { 00:20:19.136 "bdev_io_pool_size": 65535, 00:20:19.136 "bdev_io_cache_size": 256, 00:20:19.136 "bdev_auto_examine": true, 00:20:19.136 "iobuf_small_cache_size": 128, 00:20:19.136 "iobuf_large_cache_size": 16 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "method": "bdev_raid_set_options", 00:20:19.136 "params": { 00:20:19.136 "process_window_size_kb": 1024, 00:20:19.136 "process_max_bandwidth_mb_sec": 0 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "method": "bdev_iscsi_set_options", 00:20:19.136 "params": { 00:20:19.136 "timeout_sec": 30 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "method": "bdev_nvme_set_options", 00:20:19.136 "params": { 00:20:19.136 "action_on_timeout": "none", 00:20:19.136 
"timeout_us": 0, 00:20:19.136 "timeout_admin_us": 0, 00:20:19.136 "keep_alive_timeout_ms": 10000, 00:20:19.136 "arbitration_burst": 0, 00:20:19.136 "low_priority_weight": 0, 00:20:19.136 "medium_priority_weight": 0, 00:20:19.136 "high_priority_weight": 0, 00:20:19.136 "nvme_adminq_poll_period_us": 10000, 00:20:19.136 "nvme_ioq_poll_period_us": 0, 00:20:19.136 "io_queue_requests": 0, 00:20:19.136 "delay_cmd_submit": true, 00:20:19.136 "transport_retry_count": 4, 00:20:19.136 "bdev_retry_count": 3, 00:20:19.136 "transport_ack_timeout": 0, 00:20:19.136 "ctrlr_loss_timeout_sec": 0, 00:20:19.136 "reconnect_delay_sec": 0, 00:20:19.136 "fast_io_fail_timeout_sec": 0, 00:20:19.136 "disable_auto_failback": false, 00:20:19.136 "generate_uuids": false, 00:20:19.136 "transport_tos": 0, 00:20:19.136 "nvme_error_stat": false, 00:20:19.136 "rdma_srq_size": 0, 00:20:19.136 "io_path_stat": false, 00:20:19.136 "allow_accel_sequence": false, 00:20:19.136 "rdma_max_cq_size": 0, 00:20:19.136 "rdma_cm_event_timeout_ms": 0, 00:20:19.136 "dhchap_digests": [ 00:20:19.136 "sha256", 00:20:19.136 "sha384", 00:20:19.136 "sha512" 00:20:19.136 ], 00:20:19.136 "dhchap_dhgroups": [ 00:20:19.136 "null", 00:20:19.136 "ffdhe2048", 00:20:19.136 "ffdhe3072", 00:20:19.136 "ffdhe4096", 00:20:19.136 "ffdhe6144", 00:20:19.136 "ffdhe8192" 00:20:19.136 ] 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "method": "bdev_nvme_set_hotplug", 00:20:19.136 "params": { 00:20:19.136 "period_us": 100000, 00:20:19.136 "enable": false 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "method": "bdev_malloc_create", 00:20:19.136 "params": { 00:20:19.136 "name": "malloc0", 00:20:19.136 "num_blocks": 8192, 00:20:19.136 "block_size": 4096, 00:20:19.136 "physical_block_size": 4096, 00:20:19.136 "uuid": "da4b343e-151a-4c62-95ea-f13feed6eeea", 00:20:19.136 "optimal_io_boundary": 0, 00:20:19.136 "md_size": 0, 00:20:19.136 "dif_type": 0, 00:20:19.136 "dif_is_head_of_md": false, 00:20:19.136 "dif_pi_format": 0 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "method": "bdev_wait_for_examine" 00:20:19.136 } 00:20:19.136 ] 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "subsystem": "nbd", 00:20:19.136 "config": [] 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "subsystem": "scheduler", 00:20:19.136 "config": [ 00:20:19.136 { 00:20:19.136 "method": "framework_set_scheduler", 00:20:19.136 "params": { 00:20:19.136 "name": "static" 00:20:19.136 } 00:20:19.136 } 00:20:19.136 ] 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "subsystem": "nvmf", 00:20:19.136 "config": [ 00:20:19.136 { 00:20:19.136 "method": "nvmf_set_config", 00:20:19.136 "params": { 00:20:19.136 "discovery_filter": "match_any", 00:20:19.136 "admin_cmd_passthru": { 00:20:19.136 "identify_ctrlr": false 00:20:19.136 }, 00:20:19.136 "dhchap_digests": [ 00:20:19.136 "sha256", 00:20:19.136 "sha384", 00:20:19.136 "sha512" 00:20:19.136 ], 00:20:19.136 "dhchap_dhgroups": [ 00:20:19.136 "null", 00:20:19.136 "ffdhe2048", 00:20:19.136 "ffdhe3072", 00:20:19.136 "ffdhe4096", 00:20:19.136 "ffdhe6144", 00:20:19.136 "ffdhe8192" 00:20:19.136 ] 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "method": "nvmf_set_max_subsystems", 00:20:19.136 "params": { 00:20:19.136 "max_subsystems": 1024 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "method": "nvmf_set_crdt", 00:20:19.136 "params": { 00:20:19.136 "crdt1": 0, 00:20:19.136 "crdt2": 0, 00:20:19.136 "crdt3": 0 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "method": "nvmf_create_transport", 00:20:19.136 "params": 
{ 00:20:19.136 "trtype": "TCP", 00:20:19.136 "max_queue_depth": 128, 00:20:19.136 "max_io_qpairs_per_ctrlr": 127, 00:20:19.136 "in_capsule_data_size": 4096, 00:20:19.136 "max_io_size": 131072, 00:20:19.136 "io_unit_size": 131072, 00:20:19.136 "max_aq_depth": 128, 00:20:19.136 "num_shared_buffers": 511, 00:20:19.136 "buf_cache_size": 4294967295, 00:20:19.136 "dif_insert_or_strip": false, 00:20:19.136 "zcopy": false, 00:20:19.136 "c2h_success": false, 00:20:19.136 "sock_priority": 0, 00:20:19.136 "abort_timeout_sec": 1, 00:20:19.136 "ack_timeout": 0, 00:20:19.136 "data_wr_pool_size": 0 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.136 "method": "nvmf_create_subsystem", 00:20:19.136 "params": { 00:20:19.136 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.136 "allow_any_host": false, 00:20:19.136 "serial_number": "00000000000000000000", 00:20:19.136 "model_number": "SPDK bdev Controller", 00:20:19.136 "max_namespaces": 32, 00:20:19.136 "min_cntlid": 1, 00:20:19.136 "max_cntlid": 65519, 00:20:19.136 "ana_reporting": false 00:20:19.136 } 00:20:19.136 }, 00:20:19.136 { 00:20:19.137 "method": "nvmf_subsystem_add_host", 00:20:19.137 "params": { 00:20:19.137 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.137 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.137 "psk": "key0" 00:20:19.137 } 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "method": "nvmf_subsystem_add_ns", 00:20:19.137 "params": { 00:20:19.137 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.137 "namespace": { 00:20:19.137 "nsid": 1, 00:20:19.137 "bdev_name": "malloc0", 00:20:19.137 "nguid": "DA4B343E151A4C6295EAF13FEED6EEEA", 00:20:19.137 "uuid": "da4b343e-151a-4c62-95ea-f13feed6eeea", 00:20:19.137 "no_auto_visible": false 00:20:19.137 } 00:20:19.137 } 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "method": "nvmf_subsystem_add_listener", 00:20:19.137 "params": { 00:20:19.137 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.137 "listen_address": { 00:20:19.137 "trtype": "TCP", 00:20:19.137 "adrfam": "IPv4", 00:20:19.137 "traddr": "10.0.0.2", 00:20:19.137 "trsvcid": "4420" 00:20:19.137 }, 00:20:19.137 "secure_channel": false, 00:20:19.137 "sock_impl": "ssl" 00:20:19.137 } 00:20:19.137 } 00:20:19.137 ] 00:20:19.137 } 00:20:19.137 ] 00:20:19.137 }' 00:20:19.137 15:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:19.137 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:19.137 "subsystems": [ 00:20:19.137 { 00:20:19.137 "subsystem": "keyring", 00:20:19.137 "config": [ 00:20:19.137 { 00:20:19.137 "method": "keyring_file_add_key", 00:20:19.137 "params": { 00:20:19.137 "name": "key0", 00:20:19.137 "path": "/tmp/tmp.TqfbSgrgHa" 00:20:19.137 } 00:20:19.137 } 00:20:19.137 ] 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "subsystem": "iobuf", 00:20:19.137 "config": [ 00:20:19.137 { 00:20:19.137 "method": "iobuf_set_options", 00:20:19.137 "params": { 00:20:19.137 "small_pool_count": 8192, 00:20:19.137 "large_pool_count": 1024, 00:20:19.137 "small_bufsize": 8192, 00:20:19.137 "large_bufsize": 135168, 00:20:19.137 "enable_numa": false 00:20:19.137 } 00:20:19.137 } 00:20:19.137 ] 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "subsystem": "sock", 00:20:19.137 "config": [ 00:20:19.137 { 00:20:19.137 "method": "sock_set_default_impl", 00:20:19.137 "params": { 00:20:19.137 "impl_name": "posix" 00:20:19.137 } 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "method": "sock_impl_set_options", 00:20:19.137 
"params": { 00:20:19.137 "impl_name": "ssl", 00:20:19.137 "recv_buf_size": 4096, 00:20:19.137 "send_buf_size": 4096, 00:20:19.137 "enable_recv_pipe": true, 00:20:19.137 "enable_quickack": false, 00:20:19.137 "enable_placement_id": 0, 00:20:19.137 "enable_zerocopy_send_server": true, 00:20:19.137 "enable_zerocopy_send_client": false, 00:20:19.137 "zerocopy_threshold": 0, 00:20:19.137 "tls_version": 0, 00:20:19.137 "enable_ktls": false 00:20:19.137 } 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "method": "sock_impl_set_options", 00:20:19.137 "params": { 00:20:19.137 "impl_name": "posix", 00:20:19.137 "recv_buf_size": 2097152, 00:20:19.137 "send_buf_size": 2097152, 00:20:19.137 "enable_recv_pipe": true, 00:20:19.137 "enable_quickack": false, 00:20:19.137 "enable_placement_id": 0, 00:20:19.137 "enable_zerocopy_send_server": true, 00:20:19.137 "enable_zerocopy_send_client": false, 00:20:19.137 "zerocopy_threshold": 0, 00:20:19.137 "tls_version": 0, 00:20:19.137 "enable_ktls": false 00:20:19.137 } 00:20:19.137 } 00:20:19.137 ] 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "subsystem": "vmd", 00:20:19.137 "config": [] 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "subsystem": "accel", 00:20:19.137 "config": [ 00:20:19.137 { 00:20:19.137 "method": "accel_set_options", 00:20:19.137 "params": { 00:20:19.137 "small_cache_size": 128, 00:20:19.137 "large_cache_size": 16, 00:20:19.137 "task_count": 2048, 00:20:19.137 "sequence_count": 2048, 00:20:19.137 "buf_count": 2048 00:20:19.137 } 00:20:19.137 } 00:20:19.137 ] 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "subsystem": "bdev", 00:20:19.137 "config": [ 00:20:19.137 { 00:20:19.137 "method": "bdev_set_options", 00:20:19.137 "params": { 00:20:19.137 "bdev_io_pool_size": 65535, 00:20:19.137 "bdev_io_cache_size": 256, 00:20:19.137 "bdev_auto_examine": true, 00:20:19.137 "iobuf_small_cache_size": 128, 00:20:19.137 "iobuf_large_cache_size": 16 00:20:19.137 } 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "method": "bdev_raid_set_options", 00:20:19.137 "params": { 00:20:19.137 "process_window_size_kb": 1024, 00:20:19.137 "process_max_bandwidth_mb_sec": 0 00:20:19.137 } 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "method": "bdev_iscsi_set_options", 00:20:19.137 "params": { 00:20:19.137 "timeout_sec": 30 00:20:19.137 } 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "method": "bdev_nvme_set_options", 00:20:19.137 "params": { 00:20:19.137 "action_on_timeout": "none", 00:20:19.137 "timeout_us": 0, 00:20:19.137 "timeout_admin_us": 0, 00:20:19.137 "keep_alive_timeout_ms": 10000, 00:20:19.137 "arbitration_burst": 0, 00:20:19.137 "low_priority_weight": 0, 00:20:19.137 "medium_priority_weight": 0, 00:20:19.137 "high_priority_weight": 0, 00:20:19.137 "nvme_adminq_poll_period_us": 10000, 00:20:19.137 "nvme_ioq_poll_period_us": 0, 00:20:19.137 "io_queue_requests": 512, 00:20:19.137 "delay_cmd_submit": true, 00:20:19.137 "transport_retry_count": 4, 00:20:19.137 "bdev_retry_count": 3, 00:20:19.137 "transport_ack_timeout": 0, 00:20:19.137 "ctrlr_loss_timeout_sec": 0, 00:20:19.137 "reconnect_delay_sec": 0, 00:20:19.137 "fast_io_fail_timeout_sec": 0, 00:20:19.137 "disable_auto_failback": false, 00:20:19.137 "generate_uuids": false, 00:20:19.137 "transport_tos": 0, 00:20:19.137 "nvme_error_stat": false, 00:20:19.137 "rdma_srq_size": 0, 00:20:19.137 "io_path_stat": false, 00:20:19.137 "allow_accel_sequence": false, 00:20:19.137 "rdma_max_cq_size": 0, 00:20:19.137 "rdma_cm_event_timeout_ms": 0, 00:20:19.137 "dhchap_digests": [ 00:20:19.137 "sha256", 00:20:19.137 "sha384", 00:20:19.137 
"sha512" 00:20:19.137 ], 00:20:19.137 "dhchap_dhgroups": [ 00:20:19.137 "null", 00:20:19.137 "ffdhe2048", 00:20:19.137 "ffdhe3072", 00:20:19.137 "ffdhe4096", 00:20:19.137 "ffdhe6144", 00:20:19.137 "ffdhe8192" 00:20:19.137 ] 00:20:19.137 } 00:20:19.137 }, 00:20:19.137 { 00:20:19.137 "method": "bdev_nvme_attach_controller", 00:20:19.137 "params": { 00:20:19.137 "name": "nvme0", 00:20:19.137 "trtype": "TCP", 00:20:19.137 "adrfam": "IPv4", 00:20:19.137 "traddr": "10.0.0.2", 00:20:19.137 "trsvcid": "4420", 00:20:19.137 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.137 "prchk_reftag": false, 00:20:19.137 "prchk_guard": false, 00:20:19.137 "ctrlr_loss_timeout_sec": 0, 00:20:19.137 "reconnect_delay_sec": 0, 00:20:19.138 "fast_io_fail_timeout_sec": 0, 00:20:19.138 "psk": "key0", 00:20:19.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.138 "hdgst": false, 00:20:19.138 "ddgst": false, 00:20:19.138 "multipath": "multipath" 00:20:19.138 } 00:20:19.138 }, 00:20:19.138 { 00:20:19.138 "method": "bdev_nvme_set_hotplug", 00:20:19.138 "params": { 00:20:19.138 "period_us": 100000, 00:20:19.138 "enable": false 00:20:19.138 } 00:20:19.138 }, 00:20:19.138 { 00:20:19.138 "method": "bdev_enable_histogram", 00:20:19.138 "params": { 00:20:19.138 "name": "nvme0n1", 00:20:19.138 "enable": true 00:20:19.138 } 00:20:19.138 }, 00:20:19.138 { 00:20:19.138 "method": "bdev_wait_for_examine" 00:20:19.138 } 00:20:19.138 ] 00:20:19.138 }, 00:20:19.138 { 00:20:19.138 "subsystem": "nbd", 00:20:19.138 "config": [] 00:20:19.138 } 00:20:19.138 ] 00:20:19.138 }' 00:20:19.138 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3801052 00:20:19.138 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3801052 ']' 00:20:19.138 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3801052 00:20:19.138 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:19.138 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:19.138 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3801052 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3801052' 00:20:19.398 killing process with pid 3801052 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3801052 00:20:19.398 Received shutdown signal, test time was about 1.000000 seconds 00:20:19.398 00:20:19.398 Latency(us) 00:20:19.398 [2024-11-06T14:32:37.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.398 [2024-11-06T14:32:37.381Z] =================================================================================================================== 00:20:19.398 [2024-11-06T14:32:37.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3801052 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3800997 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3800997 
']' 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3800997 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3800997 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3800997' 00:20:19.398 killing process with pid 3800997 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3800997 00:20:19.398 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3800997 00:20:19.659 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:19.659 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.659 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:19.659 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.659 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:19.660 "subsystems": [ 00:20:19.660 { 00:20:19.660 "subsystem": "keyring", 00:20:19.660 "config": [ 00:20:19.660 { 00:20:19.660 "method": "keyring_file_add_key", 00:20:19.660 "params": { 00:20:19.660 "name": "key0", 00:20:19.660 "path": "/tmp/tmp.TqfbSgrgHa" 00:20:19.660 } 00:20:19.660 } 00:20:19.660 ] 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "subsystem": "iobuf", 00:20:19.660 "config": [ 00:20:19.660 { 00:20:19.660 "method": "iobuf_set_options", 00:20:19.660 "params": { 00:20:19.660 "small_pool_count": 8192, 00:20:19.660 "large_pool_count": 1024, 00:20:19.660 "small_bufsize": 8192, 00:20:19.660 "large_bufsize": 135168, 00:20:19.660 "enable_numa": false 00:20:19.660 } 00:20:19.660 } 00:20:19.660 ] 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "subsystem": "sock", 00:20:19.660 "config": [ 00:20:19.660 { 00:20:19.660 "method": "sock_set_default_impl", 00:20:19.660 "params": { 00:20:19.660 "impl_name": "posix" 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "sock_impl_set_options", 00:20:19.660 "params": { 00:20:19.660 "impl_name": "ssl", 00:20:19.660 "recv_buf_size": 4096, 00:20:19.660 "send_buf_size": 4096, 00:20:19.660 "enable_recv_pipe": true, 00:20:19.660 "enable_quickack": false, 00:20:19.660 "enable_placement_id": 0, 00:20:19.660 "enable_zerocopy_send_server": true, 00:20:19.660 "enable_zerocopy_send_client": false, 00:20:19.660 "zerocopy_threshold": 0, 00:20:19.660 "tls_version": 0, 00:20:19.660 "enable_ktls": false 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "sock_impl_set_options", 00:20:19.660 "params": { 00:20:19.660 "impl_name": "posix", 00:20:19.660 "recv_buf_size": 2097152, 00:20:19.660 "send_buf_size": 2097152, 00:20:19.660 "enable_recv_pipe": true, 00:20:19.660 "enable_quickack": false, 00:20:19.660 "enable_placement_id": 0, 00:20:19.660 "enable_zerocopy_send_server": true, 00:20:19.660 "enable_zerocopy_send_client": 
false, 00:20:19.660 "zerocopy_threshold": 0, 00:20:19.660 "tls_version": 0, 00:20:19.660 "enable_ktls": false 00:20:19.660 } 00:20:19.660 } 00:20:19.660 ] 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "subsystem": "vmd", 00:20:19.660 "config": [] 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "subsystem": "accel", 00:20:19.660 "config": [ 00:20:19.660 { 00:20:19.660 "method": "accel_set_options", 00:20:19.660 "params": { 00:20:19.660 "small_cache_size": 128, 00:20:19.660 "large_cache_size": 16, 00:20:19.660 "task_count": 2048, 00:20:19.660 "sequence_count": 2048, 00:20:19.660 "buf_count": 2048 00:20:19.660 } 00:20:19.660 } 00:20:19.660 ] 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "subsystem": "bdev", 00:20:19.660 "config": [ 00:20:19.660 { 00:20:19.660 "method": "bdev_set_options", 00:20:19.660 "params": { 00:20:19.660 "bdev_io_pool_size": 65535, 00:20:19.660 "bdev_io_cache_size": 256, 00:20:19.660 "bdev_auto_examine": true, 00:20:19.660 "iobuf_small_cache_size": 128, 00:20:19.660 "iobuf_large_cache_size": 16 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "bdev_raid_set_options", 00:20:19.660 "params": { 00:20:19.660 "process_window_size_kb": 1024, 00:20:19.660 "process_max_bandwidth_mb_sec": 0 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "bdev_iscsi_set_options", 00:20:19.660 "params": { 00:20:19.660 "timeout_sec": 30 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "bdev_nvme_set_options", 00:20:19.660 "params": { 00:20:19.660 "action_on_timeout": "none", 00:20:19.660 "timeout_us": 0, 00:20:19.660 "timeout_admin_us": 0, 00:20:19.660 "keep_alive_timeout_ms": 10000, 00:20:19.660 "arbitration_burst": 0, 00:20:19.660 "low_priority_weight": 0, 00:20:19.660 "medium_priority_weight": 0, 00:20:19.660 "high_priority_weight": 0, 00:20:19.660 "nvme_adminq_poll_period_us": 10000, 00:20:19.660 "nvme_ioq_poll_period_us": 0, 00:20:19.660 "io_queue_requests": 0, 00:20:19.660 "delay_cmd_submit": true, 00:20:19.660 "transport_retry_count": 4, 00:20:19.660 "bdev_retry_count": 3, 00:20:19.660 "transport_ack_timeout": 0, 00:20:19.660 "ctrlr_loss_timeout_sec": 0, 00:20:19.660 "reconnect_delay_sec": 0, 00:20:19.660 "fast_io_fail_timeout_sec": 0, 00:20:19.660 "disable_auto_failback": false, 00:20:19.660 "generate_uuids": false, 00:20:19.660 "transport_tos": 0, 00:20:19.660 "nvme_error_stat": false, 00:20:19.660 "rdma_srq_size": 0, 00:20:19.660 "io_path_stat": false, 00:20:19.660 "allow_accel_sequence": false, 00:20:19.660 "rdma_max_cq_size": 0, 00:20:19.660 "rdma_cm_event_timeout_ms": 0, 00:20:19.660 "dhchap_digests": [ 00:20:19.660 "sha256", 00:20:19.660 "sha384", 00:20:19.660 "sha512" 00:20:19.660 ], 00:20:19.660 "dhchap_dhgroups": [ 00:20:19.660 "null", 00:20:19.660 "ffdhe2048", 00:20:19.660 "ffdhe3072", 00:20:19.660 "ffdhe4096", 00:20:19.660 "ffdhe6144", 00:20:19.660 "ffdhe8192" 00:20:19.660 ] 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "bdev_nvme_set_hotplug", 00:20:19.660 "params": { 00:20:19.660 "period_us": 100000, 00:20:19.660 "enable": false 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "bdev_malloc_create", 00:20:19.660 "params": { 00:20:19.660 "name": "malloc0", 00:20:19.660 "num_blocks": 8192, 00:20:19.660 "block_size": 4096, 00:20:19.660 "physical_block_size": 4096, 00:20:19.660 "uuid": "da4b343e-151a-4c62-95ea-f13feed6eeea", 00:20:19.660 "optimal_io_boundary": 0, 00:20:19.660 "md_size": 0, 00:20:19.660 "dif_type": 0, 00:20:19.660 "dif_is_head_of_md": false, 00:20:19.660 "dif_pi_format": 0 
00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "bdev_wait_for_examine" 00:20:19.660 } 00:20:19.660 ] 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "subsystem": "nbd", 00:20:19.660 "config": [] 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "subsystem": "scheduler", 00:20:19.660 "config": [ 00:20:19.660 { 00:20:19.660 "method": "framework_set_scheduler", 00:20:19.660 "params": { 00:20:19.660 "name": "static" 00:20:19.660 } 00:20:19.660 } 00:20:19.660 ] 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "subsystem": "nvmf", 00:20:19.660 "config": [ 00:20:19.660 { 00:20:19.660 "method": "nvmf_set_config", 00:20:19.660 "params": { 00:20:19.660 "discovery_filter": "match_any", 00:20:19.660 "admin_cmd_passthru": { 00:20:19.660 "identify_ctrlr": false 00:20:19.660 }, 00:20:19.660 "dhchap_digests": [ 00:20:19.660 "sha256", 00:20:19.660 "sha384", 00:20:19.660 "sha512" 00:20:19.660 ], 00:20:19.660 "dhchap_dhgroups": [ 00:20:19.660 "null", 00:20:19.660 "ffdhe2048", 00:20:19.660 "ffdhe3072", 00:20:19.660 "ffdhe4096", 00:20:19.660 "ffdhe6144", 00:20:19.660 "ffdhe8192" 00:20:19.660 ] 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "nvmf_set_max_subsystems", 00:20:19.660 "params": { 00:20:19.660 "max_subsystems": 1024 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "nvmf_set_crdt", 00:20:19.660 "params": { 00:20:19.660 "crdt1": 0, 00:20:19.660 "crdt2": 0, 00:20:19.660 "crdt3": 0 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "nvmf_create_transport", 00:20:19.660 "params": { 00:20:19.660 "trtype": "TCP", 00:20:19.660 "max_queue_depth": 128, 00:20:19.660 "max_io_qpairs_per_ctrlr": 127, 00:20:19.660 "in_capsule_data_size": 4096, 00:20:19.660 "max_io_size": 131072, 00:20:19.660 "io_unit_size": 131072, 00:20:19.660 "max_aq_depth": 128, 00:20:19.660 "num_shared_buffers": 511, 00:20:19.660 "buf_cache_size": 4294967295, 00:20:19.660 "dif_insert_or_strip": false, 00:20:19.660 "zcopy": false, 00:20:19.660 "c2h_success": false, 00:20:19.660 "sock_priority": 0, 00:20:19.660 "abort_timeout_sec": 1, 00:20:19.660 "ack_timeout": 0, 00:20:19.660 "data_wr_pool_size": 0 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "nvmf_create_subsystem", 00:20:19.660 "params": { 00:20:19.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.660 "allow_any_host": false, 00:20:19.660 "serial_number": "00000000000000000000", 00:20:19.660 "model_number": "SPDK bdev Controller", 00:20:19.660 "max_namespaces": 32, 00:20:19.660 "min_cntlid": 1, 00:20:19.660 "max_cntlid": 65519, 00:20:19.660 "ana_reporting": false 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "nvmf_subsystem_add_host", 00:20:19.660 "params": { 00:20:19.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.660 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.660 "psk": "key0" 00:20:19.660 } 00:20:19.660 }, 00:20:19.660 { 00:20:19.660 "method": "nvmf_subsystem_add_ns", 00:20:19.660 "params": { 00:20:19.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.660 "namespace": { 00:20:19.660 "nsid": 1, 00:20:19.660 "bdev_name": "malloc0", 00:20:19.660 "nguid": "DA4B343E151A4C6295EAF13FEED6EEEA", 00:20:19.660 "uuid": "da4b343e-151a-4c62-95ea-f13feed6eeea", 00:20:19.661 "no_auto_visible": false 00:20:19.661 } 00:20:19.661 } 00:20:19.661 }, 00:20:19.661 { 00:20:19.661 "method": "nvmf_subsystem_add_listener", 00:20:19.661 "params": { 00:20:19.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.661 "listen_address": { 00:20:19.661 "trtype": "TCP", 00:20:19.661 "adrfam": "IPv4", 
00:20:19.661 "traddr": "10.0.0.2", 00:20:19.661 "trsvcid": "4420" 00:20:19.661 }, 00:20:19.661 "secure_channel": false, 00:20:19.661 "sock_impl": "ssl" 00:20:19.661 } 00:20:19.661 } 00:20:19.661 ] 00:20:19.661 } 00:20:19.661 ] 00:20:19.661 }' 00:20:19.661 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3801714 00:20:19.661 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3801714 00:20:19.661 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:19.661 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3801714 ']' 00:20:19.661 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.661 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:19.661 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.661 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:19.661 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.661 [2024-11-06 15:32:37.481966] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:20:19.661 [2024-11-06 15:32:37.482019] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.661 [2024-11-06 15:32:37.572939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.661 [2024-11-06 15:32:37.603620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.661 [2024-11-06 15:32:37.603654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.661 [2024-11-06 15:32:37.603660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.661 [2024-11-06 15:32:37.603665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.661 [2024-11-06 15:32:37.603669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:19.661 [2024-11-06 15:32:37.604220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.921 [2024-11-06 15:32:37.798758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.921 [2024-11-06 15:32:37.830790] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.921 [2024-11-06 15:32:37.830977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3802029 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3802029 /var/tmp/bdevperf.sock 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3802029 ']' 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.491 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:20.491 "subsystems": [ 00:20:20.491 { 00:20:20.491 "subsystem": "keyring", 00:20:20.491 "config": [ 00:20:20.491 { 00:20:20.491 "method": "keyring_file_add_key", 00:20:20.491 "params": { 00:20:20.491 "name": "key0", 00:20:20.491 "path": "/tmp/tmp.TqfbSgrgHa" 00:20:20.491 } 00:20:20.491 } 00:20:20.491 ] 00:20:20.491 }, 00:20:20.491 { 00:20:20.491 "subsystem": "iobuf", 00:20:20.491 "config": [ 00:20:20.491 { 00:20:20.491 "method": "iobuf_set_options", 00:20:20.491 "params": { 00:20:20.491 "small_pool_count": 8192, 00:20:20.491 "large_pool_count": 1024, 00:20:20.491 "small_bufsize": 8192, 00:20:20.491 "large_bufsize": 135168, 00:20:20.491 "enable_numa": false 00:20:20.491 } 00:20:20.491 } 00:20:20.491 ] 00:20:20.491 }, 00:20:20.491 { 00:20:20.491 "subsystem": "sock", 00:20:20.491 "config": [ 00:20:20.491 { 00:20:20.491 "method": "sock_set_default_impl", 00:20:20.491 "params": { 00:20:20.492 "impl_name": "posix" 00:20:20.492 } 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "method": "sock_impl_set_options", 00:20:20.492 "params": { 00:20:20.492 "impl_name": "ssl", 00:20:20.492 "recv_buf_size": 4096, 00:20:20.492 "send_buf_size": 4096, 00:20:20.492 "enable_recv_pipe": true, 00:20:20.492 "enable_quickack": false, 00:20:20.492 "enable_placement_id": 0, 00:20:20.492 "enable_zerocopy_send_server": true, 00:20:20.492 "enable_zerocopy_send_client": false, 00:20:20.492 "zerocopy_threshold": 0, 00:20:20.492 "tls_version": 0, 00:20:20.492 "enable_ktls": false 00:20:20.492 } 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "method": "sock_impl_set_options", 00:20:20.492 "params": { 00:20:20.492 "impl_name": "posix", 00:20:20.492 "recv_buf_size": 2097152, 00:20:20.492 "send_buf_size": 2097152, 00:20:20.492 "enable_recv_pipe": true, 00:20:20.492 "enable_quickack": false, 00:20:20.492 "enable_placement_id": 0, 00:20:20.492 "enable_zerocopy_send_server": true, 00:20:20.492 "enable_zerocopy_send_client": false, 00:20:20.492 "zerocopy_threshold": 0, 00:20:20.492 "tls_version": 0, 00:20:20.492 "enable_ktls": false 00:20:20.492 } 00:20:20.492 } 00:20:20.492 ] 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "subsystem": "vmd", 00:20:20.492 "config": [] 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "subsystem": "accel", 00:20:20.492 "config": [ 00:20:20.492 { 00:20:20.492 "method": "accel_set_options", 00:20:20.492 "params": { 00:20:20.492 "small_cache_size": 128, 00:20:20.492 "large_cache_size": 16, 00:20:20.492 "task_count": 2048, 00:20:20.492 "sequence_count": 2048, 00:20:20.492 "buf_count": 2048 00:20:20.492 } 00:20:20.492 } 00:20:20.492 ] 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "subsystem": "bdev", 00:20:20.492 "config": [ 00:20:20.492 { 00:20:20.492 "method": "bdev_set_options", 00:20:20.492 "params": { 00:20:20.492 "bdev_io_pool_size": 65535, 00:20:20.492 "bdev_io_cache_size": 256, 00:20:20.492 "bdev_auto_examine": true, 00:20:20.492 "iobuf_small_cache_size": 128, 00:20:20.492 "iobuf_large_cache_size": 16 00:20:20.492 } 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "method": 
"bdev_raid_set_options", 00:20:20.492 "params": { 00:20:20.492 "process_window_size_kb": 1024, 00:20:20.492 "process_max_bandwidth_mb_sec": 0 00:20:20.492 } 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "method": "bdev_iscsi_set_options", 00:20:20.492 "params": { 00:20:20.492 "timeout_sec": 30 00:20:20.492 } 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "method": "bdev_nvme_set_options", 00:20:20.492 "params": { 00:20:20.492 "action_on_timeout": "none", 00:20:20.492 "timeout_us": 0, 00:20:20.492 "timeout_admin_us": 0, 00:20:20.492 "keep_alive_timeout_ms": 10000, 00:20:20.492 "arbitration_burst": 0, 00:20:20.492 "low_priority_weight": 0, 00:20:20.492 "medium_priority_weight": 0, 00:20:20.492 "high_priority_weight": 0, 00:20:20.492 "nvme_adminq_poll_period_us": 10000, 00:20:20.492 "nvme_ioq_poll_period_us": 0, 00:20:20.492 "io_queue_requests": 512, 00:20:20.492 "delay_cmd_submit": true, 00:20:20.492 "transport_retry_count": 4, 00:20:20.492 "bdev_retry_count": 3, 00:20:20.492 "transport_ack_timeout": 0, 00:20:20.492 "ctrlr_loss_timeout_sec": 0, 00:20:20.492 "reconnect_delay_sec": 0, 00:20:20.492 "fast_io_fail_timeout_sec": 0, 00:20:20.492 "disable_auto_failback": false, 00:20:20.492 "generate_uuids": false, 00:20:20.492 "transport_tos": 0, 00:20:20.492 "nvme_error_stat": false, 00:20:20.492 "rdma_srq_size": 0, 00:20:20.492 "io_path_stat": false, 00:20:20.492 "allow_accel_sequence": false, 00:20:20.492 "rdma_max_cq_size": 0, 00:20:20.492 "rdma_cm_event_timeout_ms": 0, 00:20:20.492 "dhchap_digests": [ 00:20:20.492 "sha256", 00:20:20.492 "sha384", 00:20:20.492 "sha512" 00:20:20.492 ], 00:20:20.492 "dhchap_dhgroups": [ 00:20:20.492 "null", 00:20:20.492 "ffdhe2048", 00:20:20.492 "ffdhe3072", 00:20:20.492 "ffdhe4096", 00:20:20.492 "ffdhe6144", 00:20:20.492 "ffdhe8192" 00:20:20.492 ] 00:20:20.492 } 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "method": "bdev_nvme_attach_controller", 00:20:20.492 "params": { 00:20:20.492 "name": "nvme0", 00:20:20.492 "trtype": "TCP", 00:20:20.492 "adrfam": "IPv4", 00:20:20.492 "traddr": "10.0.0.2", 00:20:20.492 "trsvcid": "4420", 00:20:20.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.492 "prchk_reftag": false, 00:20:20.492 "prchk_guard": false, 00:20:20.492 "ctrlr_loss_timeout_sec": 0, 00:20:20.492 "reconnect_delay_sec": 0, 00:20:20.492 "fast_io_fail_timeout_sec": 0, 00:20:20.492 "psk": "key0", 00:20:20.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.492 "hdgst": false, 00:20:20.492 "ddgst": false, 00:20:20.492 "multipath": "multipath" 00:20:20.492 } 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "method": "bdev_nvme_set_hotplug", 00:20:20.492 "params": { 00:20:20.492 "period_us": 100000, 00:20:20.492 "enable": false 00:20:20.492 } 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "method": "bdev_enable_histogram", 00:20:20.492 "params": { 00:20:20.492 "name": "nvme0n1", 00:20:20.492 "enable": true 00:20:20.492 } 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "method": "bdev_wait_for_examine" 00:20:20.492 } 00:20:20.492 ] 00:20:20.492 }, 00:20:20.492 { 00:20:20.492 "subsystem": "nbd", 00:20:20.492 "config": [] 00:20:20.492 } 00:20:20.492 ] 00:20:20.492 }' 00:20:20.492 [2024-11-06 15:32:38.378176] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:20:20.492 [2024-11-06 15:32:38.378232] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802029 ] 00:20:20.492 [2024-11-06 15:32:38.461521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.752 [2024-11-06 15:32:38.491212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.752 [2024-11-06 15:32:38.627316] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.322 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:21.322 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:21.322 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:21.322 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:21.582 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.582 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:21.582 Running I/O for 1 seconds... 00:20:22.522 5512.00 IOPS, 21.53 MiB/s 00:20:22.522 Latency(us) 00:20:22.522 [2024-11-06T14:32:40.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.522 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:22.522 Verification LBA range: start 0x0 length 0x2000 00:20:22.522 nvme0n1 : 1.02 5530.52 21.60 0.00 0.00 22948.76 6280.53 34734.08 00:20:22.522 [2024-11-06T14:32:40.505Z] =================================================================================================================== 00:20:22.522 [2024-11-06T14:32:40.505Z] Total : 5530.52 21.60 0.00 0.00 22948.76 6280.53 34734.08 00:20:22.522 { 00:20:22.522 "results": [ 00:20:22.522 { 00:20:22.522 "job": "nvme0n1", 00:20:22.522 "core_mask": "0x2", 00:20:22.522 "workload": "verify", 00:20:22.522 "status": "finished", 00:20:22.522 "verify_range": { 00:20:22.522 "start": 0, 00:20:22.522 "length": 8192 00:20:22.522 }, 00:20:22.522 "queue_depth": 128, 00:20:22.522 "io_size": 4096, 00:20:22.522 "runtime": 1.019795, 00:20:22.522 "iops": 5530.52329144583, 00:20:22.522 "mibps": 21.603606607210274, 00:20:22.522 "io_failed": 0, 00:20:22.522 "io_timeout": 0, 00:20:22.522 "avg_latency_us": 22948.764747044916, 00:20:22.523 "min_latency_us": 6280.533333333334, 00:20:22.523 "max_latency_us": 34734.08 00:20:22.523 } 00:20:22.523 ], 00:20:22.523 "core_count": 1 00:20:22.523 } 00:20:22.523 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:22.523 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:22.523 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:22.523 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:20:22.523 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:20:22.523 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid 
']' 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:22.783 nvmf_trace.0 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3802029 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3802029 ']' 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3802029 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3802029 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3802029' 00:20:22.783 killing process with pid 3802029 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3802029 00:20:22.783 Received shutdown signal, test time was about 1.000000 seconds 00:20:22.783 00:20:22.783 Latency(us) 00:20:22.783 [2024-11-06T14:32:40.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.783 [2024-11-06T14:32:40.766Z] =================================================================================================================== 00:20:22.783 [2024-11-06T14:32:40.766Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:22.783 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3802029 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:23.043 rmmod nvme_tcp 00:20:23.043 rmmod nvme_fabrics 00:20:23.043 rmmod nvme_keyring 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:23.043 15:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3801714 ']' 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3801714 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3801714 ']' 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3801714 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3801714 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3801714' 00:20:23.043 killing process with pid 3801714 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3801714 00:20:23.043 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3801714 00:20:23.043 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:23.043 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:23.043 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:23.043 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:23.043 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:23.043 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:23.043 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:23.303 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:23.303 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:23.303 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.303 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.303 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.220 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:25.220 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.rrviJjvckA /tmp/tmp.JZlLRYCxVz /tmp/tmp.TqfbSgrgHa 00:20:25.220 00:20:25.220 real 1m28.898s 00:20:25.220 user 2m20.346s 00:20:25.220 sys 0m27.544s 00:20:25.220 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:25.220 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.220 ************************************ 00:20:25.220 END TEST nvmf_tls 
00:20:25.220 ************************************ 00:20:25.220 15:32:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:25.220 15:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:25.220 15:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:25.220 15:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:25.220 ************************************ 00:20:25.220 START TEST nvmf_fips 00:20:25.220 ************************************ 00:20:25.220 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:25.487 * Looking for test storage... 00:20:25.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:25.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.487 --rc genhtml_branch_coverage=1 00:20:25.487 --rc genhtml_function_coverage=1 00:20:25.487 --rc genhtml_legend=1 00:20:25.487 --rc geninfo_all_blocks=1 00:20:25.487 --rc geninfo_unexecuted_blocks=1 00:20:25.487 00:20:25.487 ' 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:25.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.487 --rc genhtml_branch_coverage=1 00:20:25.487 --rc genhtml_function_coverage=1 00:20:25.487 --rc genhtml_legend=1 00:20:25.487 --rc geninfo_all_blocks=1 00:20:25.487 --rc geninfo_unexecuted_blocks=1 00:20:25.487 00:20:25.487 ' 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:25.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.487 --rc genhtml_branch_coverage=1 00:20:25.487 --rc genhtml_function_coverage=1 00:20:25.487 --rc genhtml_legend=1 00:20:25.487 --rc geninfo_all_blocks=1 00:20:25.487 --rc geninfo_unexecuted_blocks=1 00:20:25.487 00:20:25.487 ' 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:25.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.487 --rc genhtml_branch_coverage=1 00:20:25.487 --rc genhtml_function_coverage=1 00:20:25.487 --rc genhtml_legend=1 00:20:25.487 --rc geninfo_all_blocks=1 00:20:25.487 --rc geninfo_unexecuted_blocks=1 00:20:25.487 00:20:25.487 ' 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:25.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:25.487 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:25.488 15:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:25.488 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:25.749 Error setting digest 00:20:25.749 4032ACD8827F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:25.749 4032ACD8827F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:25.749 
15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.749 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.750 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.750 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.750 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:25.750 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:25.750 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:25.750 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.886 15:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:33.886 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:33.886 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:33.886 15:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:33.886 Found net devices under 0000:31:00.0: cvl_0_0 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:33.886 Found net devices under 0000:31:00.1: cvl_0_1 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:33.886 15:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:33.886 15:32:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:33.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:20:33.886 00:20:33.886 --- 10.0.0.2 ping statistics --- 00:20:33.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.886 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:33.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:20:33.886 00:20:33.886 --- 10.0.0.1 ping statistics --- 00:20:33.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.886 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3806795 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3806795 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3806795 ']' 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:33.886 15:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:33.886 [2024-11-06 15:32:51.255851] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
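Note: the interface discovery and namespace plumbing above boil down to a small single-host topology - the target port is moved into its own network namespace so initiator and target can talk over the two e810 ports of one machine. A condensed sketch of the commands the harness ran (device names cvl_0_0/cvl_0_1 are specific to this host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # initiator -> target reachability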
00:20:33.886 [2024-11-06 15:32:51.255924] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.886 [2024-11-06 15:32:51.357283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.886 [2024-11-06 15:32:51.406828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.886 [2024-11-06 15:32:51.406880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.886 [2024-11-06 15:32:51.406889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.886 [2024-11-06 15:32:51.406896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.886 [2024-11-06 15:32:51.406902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.886 [2024-11-06 15:32:51.407699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.146 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:34.146 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:34.146 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:34.146 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:34.147 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.147 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.147 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:34.147 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:34.147 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:34.147 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.sqw 00:20:34.147 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:34.147 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.sqw 00:20:34.147 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.sqw 00:20:34.147 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.sqw 00:20:34.147 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:34.407 [2024-11-06 15:32:52.291638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.407 [2024-11-06 15:32:52.307640] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.407 [2024-11-06 15:32:52.307957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.407 malloc0 00:20:34.407 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.407 15:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3806996 00:20:34.407 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3806996 /var/tmp/bdevperf.sock 00:20:34.407 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.407 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3806996 ']' 00:20:34.407 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.407 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:34.407 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.407 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:34.407 15:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.668 [2024-11-06 15:32:52.454263] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:20:34.668 [2024-11-06 15:32:52.454335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806996 ] 00:20:34.668 [2024-11-06 15:32:52.547676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.668 [2024-11-06 15:32:52.598641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.609 15:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:35.609 15:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:35.609 15:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.sqw 00:20:35.609 15:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:35.869 [2024-11-06 15:32:53.634874] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.869 TLSTESTn1 00:20:35.869 15:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:35.869 Running I/O for 10 seconds... 
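Note: the TLSTESTn1 workload above was staged in three steps - write the PSK interchange key to an owner-only file, register it with the bdevperf instance, then drive the 10-second verify job through bdevperf's RPC helper. Condensed, with the key value and paths copied from this run:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)                 # e.g. /tmp/spdk-psk.sqw
  echo -n "$key" > "$key_path"                       # no trailing newline in the key file
  chmod 0600 "$key_path"                             # harness restricts the key to its owner
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests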
00:20:38.193 3795.00 IOPS, 14.82 MiB/s [2024-11-06T14:32:57.115Z] 4405.50 IOPS, 17.21 MiB/s [2024-11-06T14:32:58.055Z] 4435.00 IOPS, 17.32 MiB/s [2024-11-06T14:32:58.996Z] 4833.75 IOPS, 18.88 MiB/s [2024-11-06T14:32:59.936Z] 4888.00 IOPS, 19.09 MiB/s [2024-11-06T14:33:00.874Z] 4897.83 IOPS, 19.13 MiB/s [2024-11-06T14:33:02.257Z] 4929.14 IOPS, 19.25 MiB/s [2024-11-06T14:33:03.197Z] 5034.12 IOPS, 19.66 MiB/s [2024-11-06T14:33:04.165Z] 5061.44 IOPS, 19.77 MiB/s [2024-11-06T14:33:04.165Z] 5067.70 IOPS, 19.80 MiB/s
00:20:46.182 Latency(us)
00:20:46.182 [2024-11-06T14:33:04.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:46.182 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:46.182 Verification LBA range: start 0x0 length 0x2000
00:20:46.182 TLSTESTn1 : 10.05 5057.59 19.76 0.00 0.00 25237.18 6280.53 43909.12
00:20:46.182 [2024-11-06T14:33:04.165Z] ===================================================================================================================
00:20:46.182 [2024-11-06T14:33:04.165Z] Total : 5057.59 19.76 0.00 0.00 25237.18 6280.53 43909.12
00:20:46.182 {
00:20:46.182 "results": [
00:20:46.183 {
00:20:46.183 "job": "TLSTESTn1",
00:20:46.183 "core_mask": "0x4",
00:20:46.183 "workload": "verify",
00:20:46.183 "status": "finished",
00:20:46.183 "verify_range": {
00:20:46.183 "start": 0,
00:20:46.183 "length": 8192
00:20:46.183 },
00:20:46.183 "queue_depth": 128,
00:20:46.183 "io_size": 4096,
00:20:46.183 "runtime": 10.0451,
00:20:46.183 "iops": 5057.590267891808,
00:20:46.183 "mibps": 19.756211983952376,
00:20:46.183 "io_failed": 0,
00:20:46.183 "io_timeout": 0,
00:20:46.183 "avg_latency_us": 25237.183709419205,
00:20:46.183 "min_latency_us": 6280.533333333334,
00:20:46.183 "max_latency_us": 43909.12
00:20:46.183 }
00:20:46.183 ],
00:20:46.183 "core_count": 1
00:20:46.183 }
15:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
15:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
15:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id
15:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0
15:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']'
15:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n'
15:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0
15:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]]
15:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files
15:33:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
nvmf_trace.0
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3806996
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3806996 ']'
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3806996
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3806996
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3806996'
killing process with pid 3806996
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3806996
Received shutdown signal, test time was about 10.000000 seconds
00:20:46.183
00:20:46.183 Latency(us)
00:20:46.183 [2024-11-06T14:33:04.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:46.183 [2024-11-06T14:33:04.166Z] ===================================================================================================================
00:20:46.183 [2024-11-06T14:33:04.166Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3806996
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3806795 ']'
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3806795
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3806795 ']'
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3806795
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3806795
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1
15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips --
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:46.443 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3806795' 00:20:46.443 killing process with pid 3806795 00:20:46.443 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3806795 00:20:46.443 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3806795 00:20:46.443 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:46.443 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:46.443 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:46.443 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:46.443 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:46.443 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:46.443 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:46.704 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:46.704 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:46.704 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.704 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.704 15:33:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.616 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:48.616 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.sqw 00:20:48.616 00:20:48.616 real 0m23.320s 00:20:48.616 user 0m24.915s 00:20:48.616 sys 0m9.782s 00:20:48.616 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:48.616 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:48.616 ************************************ 00:20:48.616 END TEST nvmf_fips 00:20:48.616 ************************************ 00:20:48.616 15:33:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:48.616 15:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:48.616 15:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:48.616 15:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.616 ************************************ 00:20:48.616 START TEST nvmf_control_msg_list 00:20:48.616 ************************************ 00:20:48.616 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:48.878 * Looking for test storage... 
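The nvmftestfini sequence traced above tears the NVMe/TCP test fabric back down before the next test starts: it unloads the kernel NVMe-over-TCP modules, restores every iptables rule except the SPDK_NVMF-tagged ones, removes the target network namespace, and flushes the initiator-side address. A minimal bash sketch of that teardown, with the interface and namespace names (cvl_0_1, cvl_0_0_ns_spdk) taken from this run; the `ip netns delete` line is an assumed stand-in for the harness's _remove_spdk_ns helper, and the `|| true` guards are added here for illustration:

    # Sketch of the nvmftestfini-style cleanup performed in the trace above.
    # Names come from this run; adapt to your setup.
    sync
    modprobe -v -r nvme-tcp || true                       # prints the rmmod lines seen above
    modprobe -v -r nvme-fabrics || true
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # clear the initiator-side test address

Filtering iptables-save through grep -v SPDK_NVMF is why the setup side tags its ACCEPT rules with an SPDK_NVMF comment: the restore then removes exactly the rules the tests added and nothing else.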
00:20:48.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:48.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.878 --rc genhtml_branch_coverage=1 00:20:48.878 --rc genhtml_function_coverage=1 00:20:48.878 --rc genhtml_legend=1 00:20:48.878 --rc geninfo_all_blocks=1 00:20:48.878 --rc geninfo_unexecuted_blocks=1 00:20:48.878 00:20:48.878 ' 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:48.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.878 --rc genhtml_branch_coverage=1 00:20:48.878 --rc genhtml_function_coverage=1 00:20:48.878 --rc genhtml_legend=1 00:20:48.878 --rc geninfo_all_blocks=1 00:20:48.878 --rc geninfo_unexecuted_blocks=1 00:20:48.878 00:20:48.878 ' 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:48.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.878 --rc genhtml_branch_coverage=1 00:20:48.878 --rc genhtml_function_coverage=1 00:20:48.878 --rc genhtml_legend=1 00:20:48.878 --rc geninfo_all_blocks=1 00:20:48.878 --rc geninfo_unexecuted_blocks=1 00:20:48.878 00:20:48.878 ' 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:48.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.878 --rc genhtml_branch_coverage=1 00:20:48.878 --rc genhtml_function_coverage=1 00:20:48.878 --rc genhtml_legend=1 00:20:48.878 --rc geninfo_all_blocks=1 00:20:48.878 --rc geninfo_unexecuted_blocks=1 00:20:48.878 00:20:48.878 ' 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.878 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:48.879 15:33:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:57.085 15:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:57.085 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.085 15:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:57.085 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.085 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:57.085 Found net devices under 0000:31:00.0: cvl_0_0 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:57.086 Found net devices under 0000:31:00.1: cvl_0_1 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.086 15:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:57.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:57.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms
00:20:57.086
00:20:57.086 --- 10.0.0.2 ping statistics ---
00:20:57.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:57.086 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:57.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:57.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms
00:20:57.086
00:20:57.086 --- 10.0.0.1 ping statistics ---
00:20:57.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:57.086 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']'
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3814110
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3814110
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 3814110 ']'
15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:57.086 15:33:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.086 [2024-11-06 15:33:14.606553] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:20:57.086 [2024-11-06 15:33:14.606644] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.086 [2024-11-06 15:33:14.698042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.086 [2024-11-06 15:33:14.748919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.086 [2024-11-06 15:33:14.748988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.086 [2024-11-06 15:33:14.748996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.086 [2024-11-06 15:33:14.749003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.086 [2024-11-06 15:33:14.749010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
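With the reactor started on core 0, the harness configures the target over its JSON-RPC socket; each rpc_cmd traced below is effectively a scripts/rpc.py invocation. A minimal sketch of that configuration sequence, using the same arguments as this run and assuming the default /var/tmp/spdk.sock RPC socket:

    # Sketch of the RPC configuration the control_msg_list test issues below
    # (rpc_cmd in the harness wraps scripts/rpc.py; socket path assumed default).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # -a: allow any host
    $RPC bdev_malloc_create -b Malloc0 32 512                       # 32 MiB bdev, 512-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The --control-msg-num 1 transport option appears to be the point of this test: it shrinks the control-message pool to a single entry, so the three concurrent spdk_nvme_perf jobs launched afterwards have to contend for it.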
00:20:57.086 [2024-11-06 15:33:14.749817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.662 [2024-11-06 15:33:15.481292] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.662 Malloc0 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.662 15:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.662 [2024-11-06 15:33:15.535841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3814312 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3814313 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3814314 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3814312 00:20:57.662 15:33:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.923 [2024-11-06 15:33:15.646806] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:57.923 [2024-11-06 15:33:15.647099] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:57.923 [2024-11-06 15:33:15.647401] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:58.865 Initializing NVMe Controllers 00:20:58.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:58.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:58.865 Initialization complete. Launching workers. 
00:20:58.865 ========================================================
00:20:58.865 Latency(us)
00:20:58.865 Device Information : IOPS MiB/s Average min max
00:20:58.865 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40919.11 40776.86 41303.24
00:20:58.865 ========================================================
00:20:58.865 Total : 25.00 0.10 40919.11 40776.86 41303.24
00:20:58.865
00:20:58.865
15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3814313
00:20:58.865 Initializing NVMe Controllers
00:20:58.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:20:58.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:20:58.865 Initialization complete. Launching workers.
00:20:58.865 ========================================================
00:20:58.865 Latency(us)
00:20:58.865 Device Information : IOPS MiB/s Average min max
00:20:58.865 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1495.00 5.84 669.00 201.04 817.67
00:20:58.865 ========================================================
00:20:58.865 Total : 1495.00 5.84 669.00 201.04 817.67
00:20:58.865
00:20:59.125 Initializing NVMe Controllers
00:20:59.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:20:59.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:20:59.125 Initialization complete. Launching workers.
00:20:59.125 ========================================================
00:20:59.125 Latency(us)
00:20:59.125 Device Information : IOPS MiB/s Average min max
00:20:59.125 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 24.00 0.09 41773.35 40861.78 41944.52
00:20:59.125 ========================================================
00:20:59.125 Total : 24.00 0.09 41773.35 40861.78 41944.52
00:20:59.125
00:20:59.125
15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3814314
15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
15:33:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517
-- # '[' -n 3814110 ']' 00:20:59.125 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3814110 00:20:59.125 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 3814110 ']' 00:20:59.125 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 3814110 00:20:59.125 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:20:59.125 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:59.125 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3814110 00:20:59.125 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:59.125 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:59.125 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3814110' 00:20:59.125 killing process with pid 3814110 00:20:59.125 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 3814110 00:20:59.125 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 3814110 00:20:59.386 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:59.386 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:59.386 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:59.386 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:59.387 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:59.387 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:59.387 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:59.387 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:59.387 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:59.387 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.387 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.387 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:01.930 00:21:01.930 real 0m12.747s 00:21:01.930 user 0m8.290s 00:21:01.930 sys 0m6.722s 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.930 ************************************ 00:21:01.930 END TEST nvmf_control_msg_list 00:21:01.930 
************************************ 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:01.930 ************************************ 00:21:01.930 START TEST nvmf_wait_for_buf 00:21:01.930 ************************************ 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:01.930 * Looking for test storage... 00:21:01.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:01.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.930 --rc genhtml_branch_coverage=1 00:21:01.930 --rc genhtml_function_coverage=1 00:21:01.930 --rc genhtml_legend=1 00:21:01.930 --rc geninfo_all_blocks=1 00:21:01.930 --rc geninfo_unexecuted_blocks=1 00:21:01.930 00:21:01.930 ' 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:01.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.930 --rc genhtml_branch_coverage=1 00:21:01.930 --rc genhtml_function_coverage=1 00:21:01.930 --rc genhtml_legend=1 00:21:01.930 --rc geninfo_all_blocks=1 00:21:01.930 --rc geninfo_unexecuted_blocks=1 00:21:01.930 00:21:01.930 ' 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:01.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.930 --rc genhtml_branch_coverage=1 00:21:01.930 --rc genhtml_function_coverage=1 00:21:01.930 --rc genhtml_legend=1 00:21:01.930 --rc geninfo_all_blocks=1 00:21:01.930 --rc geninfo_unexecuted_blocks=1 00:21:01.930 00:21:01.930 ' 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:01.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.930 --rc genhtml_branch_coverage=1 00:21:01.930 --rc genhtml_function_coverage=1 00:21:01.930 --rc genhtml_legend=1 00:21:01.930 --rc geninfo_all_blocks=1 00:21:01.930 --rc geninfo_unexecuted_blocks=1 00:21:01.930 00:21:01.930 ' 00:21:01.930 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.930 15:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:01.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:01.931 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.071 
15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:10.071 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:10.071 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:10.071 Found net devices under 0000:31:00.0: cvl_0_0 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.071 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:10.072 Found net devices under 0000:31:00.1: cvl_0_1 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.072 15:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.072 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:10.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:21:10.072 00:21:10.072 --- 10.0.0.2 ping statistics --- 00:21:10.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.072 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:10.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:21:10.072 00:21:10.072 --- 10.0.0.1 ping statistics --- 00:21:10.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.072 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3818836 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3818836 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 3818836 ']' 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:10.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.072 [2024-11-06 15:33:27.376973] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:21:10.072 [2024-11-06 15:33:27.377038] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.072 [2024-11-06 15:33:27.479102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.072 [2024-11-06 15:33:27.529823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.072 [2024-11-06 15:33:27.529876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.072 [2024-11-06 15:33:27.529884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.072 [2024-11-06 15:33:27.529892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.072 [2024-11-06 15:33:27.529898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.072 [2024-11-06 15:33:27.530692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:10.334 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.334 15:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.595 Malloc0 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.595 [2024-11-06 15:33:28.368907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.595 [2024-11-06 15:33:28.405220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.595 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.596 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:10.596 [2024-11-06 15:33:28.508865] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:11.982 Initializing NVMe Controllers 00:21:11.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:11.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:11.982 Initialization complete. Launching workers. 00:21:11.982 ======================================================== 00:21:11.982 Latency(us) 00:21:11.982 Device Information : IOPS MiB/s Average min max 00:21:11.982 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33597.43 8019.03 71830.06 00:21:11.982 ======================================================== 00:21:11.982 Total : 124.00 15.50 33597.43 8019.03 71830.06 00:21:11.982 00:21:11.982 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:11.982 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:11.982 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.982 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.982 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.243 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:21:12.243 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:21:12.243 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:12.243 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:12.243 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:12.243 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:12.243 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:12.243 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:12.243 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:12.243 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:12.243 rmmod nvme_tcp 00:21:12.243 rmmod nvme_fabrics 00:21:12.243 rmmod nvme_keyring 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3818836 ']' 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3818836 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 3818836 ']' 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 3818836 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3818836 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3818836' 00:21:12.243 killing process with pid 3818836 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 3818836 00:21:12.243 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 3818836 00:21:12.512 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:12.512 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:12.512 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:12.512 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:12.512 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:12.512 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:12.512 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:12.512 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:12.512 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:12.512 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.512 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.512 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.426 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:14.426 00:21:14.426 real 0m12.936s 00:21:14.426 user 0m5.215s 00:21:14.426 sys 0m6.294s 00:21:14.426 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:14.426 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.426 ************************************ 00:21:14.426 END TEST nvmf_wait_for_buf 00:21:14.426 ************************************ 00:21:14.426 15:33:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:14.426 15:33:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:14.426 15:33:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:14.426 15:33:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:14.426 15:33:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:14.426 15:33:32 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:22.566 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:22.567 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:22.567 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:22.567 Found net devices under 0000:31:00.0: cvl_0_0 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:22.567 Found net devices under 0000:31:00.1: cvl_0_1 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:22.567 ************************************ 00:21:22.567 START TEST nvmf_perf_adq 00:21:22.567 ************************************ 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:22.567 * Looking for test storage... 00:21:22.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.567 15:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.567 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:22.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.567 --rc genhtml_branch_coverage=1 00:21:22.567 --rc genhtml_function_coverage=1 00:21:22.567 --rc genhtml_legend=1 00:21:22.567 --rc geninfo_all_blocks=1 00:21:22.567 --rc geninfo_unexecuted_blocks=1 00:21:22.567 00:21:22.567 ' 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:22.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.568 --rc genhtml_branch_coverage=1 00:21:22.568 --rc genhtml_function_coverage=1 00:21:22.568 --rc genhtml_legend=1 00:21:22.568 --rc geninfo_all_blocks=1 00:21:22.568 --rc geninfo_unexecuted_blocks=1 00:21:22.568 00:21:22.568 ' 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:22.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.568 --rc genhtml_branch_coverage=1 00:21:22.568 --rc genhtml_function_coverage=1 00:21:22.568 --rc genhtml_legend=1 00:21:22.568 --rc geninfo_all_blocks=1 00:21:22.568 --rc geninfo_unexecuted_blocks=1 00:21:22.568 00:21:22.568 ' 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:22.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.568 --rc genhtml_branch_coverage=1 00:21:22.568 --rc genhtml_function_coverage=1 00:21:22.568 --rc genhtml_legend=1 00:21:22.568 --rc geninfo_all_blocks=1 00:21:22.568 --rc geninfo_unexecuted_blocks=1 00:21:22.568 00:21:22.568 ' 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:22.568 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:22.568 15:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:29.155 15:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:29.155 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:29.156 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:29.156 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:29.156 Found net devices under 0000:31:00.0: cvl_0_0 00:21:29.156 15:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:29.156 Found net devices under 0000:31:00.1: cvl_0_1 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:29.156 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:29.417 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:30.802 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:33.350 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:38.643 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:38.643 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:38.643 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.643 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:38.644 Found net devices under 0000:31:00.0: cvl_0_0 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:38.644 Found net devices under 0000:31:00.1: cvl_0_1 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:38.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:38.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms
00:21:38.644
00:21:38.644 --- 10.0.0.2 ping statistics ---
00:21:38.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:38.644 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms
00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:38.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:38.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:21:38.644 00:21:38.644 --- 10.0.0.1 ping statistics --- 00:21:38.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.644 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3829138 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3829138 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3829138 ']' 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:38.644 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.644 [2024-11-06 15:33:56.457072] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
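
Condensed from the nvmf_tcp_init trace above: the harness isolates the target-side port in its own network namespace so initiator and target traffic really crosses the wire between the two E810 ports even on a single host. A sketch of the plumbing, using the interface and namespace names discovered on this runner:

    ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # verify reachability in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process is then launched under ip netns exec cvl_0_0_ns_spdk, which is why every later target-side command in the trace carries that prefix.
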
00:21:38.644 [2024-11-06 15:33:56.457134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.644 [2024-11-06 15:33:56.558388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.644 [2024-11-06 15:33:56.612090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.644 [2024-11-06 15:33:56.612141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.644 [2024-11-06 15:33:56.612150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.644 [2024-11-06 15:33:56.612157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.644 [2024-11-06 15:33:56.612164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.644 [2024-11-06 15:33:56.614223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.644 [2024-11-06 15:33:56.614385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.644 [2024-11-06 15:33:56.614543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.644 [2024-11-06 15:33:56.614543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.589 
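
The argument to adq_configure_nvmf_target is the socket placement-id: this baseline pass passes 0 (no placement grouping), while the ADQ-enabled pass at the end of this section repeats the call with 1, which in SPDK groups connections into poll groups by the NAPI ID of the receiving hardware queue. Issued by hand, the two RPCs traced above would look roughly like this (rpc.py spelling assumed; the trace goes through the harness's rpc_cmd wrapper):

    scripts/rpc.py sock_get_default_impl                 # reports posix on this build
    scripts/rpc.py sock_impl_set_options -i posix \
        --enable-placement-id 0 --enable-zerocopy-send-server
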
15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.589 [2024-11-06 15:33:57.488523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.589 Malloc1 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.589 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.589 [2024-11-06 15:33:57.565633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.850 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.850 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3829491 00:21:39.850 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:39.850 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:41.766 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:41.766 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.766 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.766 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.766 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:41.766 "tick_rate": 2400000000, 00:21:41.766 "poll_groups": [ 00:21:41.766 { 00:21:41.766 "name": "nvmf_tgt_poll_group_000", 00:21:41.766 "admin_qpairs": 1, 00:21:41.766 "io_qpairs": 1, 00:21:41.766 "current_admin_qpairs": 1, 00:21:41.766 "current_io_qpairs": 1, 00:21:41.766 "pending_bdev_io": 0, 00:21:41.766 "completed_nvme_io": 16482, 00:21:41.766 "transports": [ 00:21:41.766 { 00:21:41.766 "trtype": "TCP" 00:21:41.766 } 00:21:41.766 ] 00:21:41.766 }, 00:21:41.766 { 00:21:41.766 "name": "nvmf_tgt_poll_group_001", 00:21:41.766 "admin_qpairs": 0, 00:21:41.766 "io_qpairs": 1, 00:21:41.766 "current_admin_qpairs": 0, 00:21:41.766 "current_io_qpairs": 1, 00:21:41.766 "pending_bdev_io": 0, 00:21:41.766 "completed_nvme_io": 16789, 00:21:41.766 "transports": [ 00:21:41.766 { 00:21:41.766 "trtype": "TCP" 00:21:41.766 } 00:21:41.766 ] 00:21:41.766 }, 00:21:41.766 { 00:21:41.766 "name": "nvmf_tgt_poll_group_002", 00:21:41.766 "admin_qpairs": 0, 00:21:41.766 "io_qpairs": 1, 00:21:41.766 "current_admin_qpairs": 0, 00:21:41.766 "current_io_qpairs": 1, 00:21:41.766 "pending_bdev_io": 0, 00:21:41.766 "completed_nvme_io": 18586, 00:21:41.766 "transports": [ 00:21:41.766 { 00:21:41.766 "trtype": "TCP" 00:21:41.766 } 00:21:41.766 ] 00:21:41.766 }, 00:21:41.766 { 00:21:41.766 "name": "nvmf_tgt_poll_group_003", 00:21:41.766 "admin_qpairs": 0, 00:21:41.766 "io_qpairs": 1, 00:21:41.766 "current_admin_qpairs": 0, 00:21:41.766 "current_io_qpairs": 1, 00:21:41.766 "pending_bdev_io": 0, 00:21:41.766 "completed_nvme_io": 16738, 00:21:41.766 "transports": [ 00:21:41.766 { 00:21:41.766 "trtype": "TCP" 00:21:41.766 } 00:21:41.766 ] 00:21:41.766 } 00:21:41.766 ] 00:21:41.766 }' 00:21:41.766 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:41.766 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:41.766 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:41.766 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:41.766 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3829491 00:21:49.904 Initializing NVMe Controllers 00:21:49.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:49.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:49.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:49.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:49.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7
00:21:49.904 Initialization complete. Launching workers.
00:21:49.904 ========================================================
00:21:49.904 Latency(us)
00:21:49.904 Device Information : IOPS MiB/s Average min max
00:21:49.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12357.70 48.27 5179.28 1359.64 10904.98
00:21:49.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12889.70 50.35 4964.91 1205.66 13610.49
00:21:49.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13329.60 52.07 4800.97 1231.48 13725.53
00:21:49.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12969.00 50.66 4935.44 1251.58 14611.55
00:21:49.904 ========================================================
00:21:49.904 Total : 51545.98 201.35 4966.49 1205.66 14611.55
00:21:49.904
00:21:49.904 [2024-11-06 15:34:07.733316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbda70 is same with the state(6) to be set
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:49.904 rmmod nvme_tcp
00:21:49.904 rmmod nvme_fabrics
00:21:49.904 rmmod nvme_keyring
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3829138 ']'
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3829138
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3829138 ']'
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3829138
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3829138
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3829138'
00:21:49.904 killing process with pid 3829138
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3829138
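
That concludes the baseline (non-ADQ) pass: spdk_nvme_perf drove 4 KiB random reads at queue depth 64 for ten seconds from four initiator cores (mask 0xF0), totaling 51545.98 IOPS, presumably the figure the ADQ-enabled pass below is compared against. The nvmf_get_stats check above verified that each of the target's four poll groups owned exactly one active I/O qpair; rerun by hand, that check is roughly:

    # hypothetical standalone version of the perf_adq.sh@85-87 check
    count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)                          # jq prints one line per poll group holding exactly 1 qpair
    [[ $count -ne 4 ]] && echo "qpairs not spread evenly across poll groups" >&2
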
00:21:49.904 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3829138 00:21:50.165 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:50.165 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:50.165 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:50.165 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:50.165 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:50.165 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:50.165 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:50.165 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.165 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:50.165 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.165 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.165 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.708 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:52.708 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:52.708 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:52.708 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:54.092 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:56.005 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:01.296 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:01.296 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.296 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.296 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.296 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.296 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.296 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.296 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.296 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.296 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.296 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.297 15:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.297 15:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:01.297 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:01.297 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:01.297 Found net devices under 0000:31:00.0: cvl_0_0 00:22:01.297 15:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:01.297 Found net devices under 0000:31:00.1: cvl_0_1 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.297 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.297 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.297 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.297 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.297 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.297 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.297 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:22:01.559 00:22:01.559 --- 10.0.0.2 ping statistics --- 00:22:01.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.559 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:01.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:22:01.559 00:22:01.559 --- 10.0.0.1 ping statistics --- 00:22:01.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.559 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:01.559 15:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:01.559 net.core.busy_poll = 1 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:01.559 net.core.busy_read = 1 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:01.559 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3833959 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3833959 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3833959 ']' 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:01.827 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.827 [2024-11-06 15:34:19.716170] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
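For reference, the ADQ driver setup that adq_configure_driver traced above condenses to the sketch below. It restates the commands already logged rather than adding configuration: DEV, ADDR and PORT stand in for this run's cvl_0_0, 10.0.0.2 and 4420, and in the trace each command is additionally wrapped in ip netns exec cvl_0_0_ns_spdk.

    DEV=cvl_0_0 ADDR=10.0.0.2 PORT=4420
    ethtool --offload "$DEV" hw-tc-offload on              # enable NIC-side traffic classes
    ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1    # poll sockets instead of sleeping
    # Two traffic classes: TC0 -> queues 0-1 (default), TC1 -> queues 2-3 (NVMe/TCP)
    tc qdisc add dev "$DEV" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$DEV" ingress
    # Steer inbound NVMe/TCP flows into TC1, offloaded to the NIC (skip_sw)
    tc filter add dev "$DEV" protocol ip parent ffff: prio 1 flower \
        dst_ip "$ADDR"/32 ip_proto tcp dst_port "$PORT" skip_sw hw_tc 1

The busy_poll/busy_read sysctls matter because ADQ only pays off when sockets are polled on the cores that own the dedicated queues; the set_xps_rxqs script invoked at perf_adq.sh@38 then pins the XPS/RXQ affinity accordingly.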
00:22:01.827 [2024-11-06 15:34:19.716238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.090 [2024-11-06 15:34:19.817628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.090 [2024-11-06 15:34:19.871772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.090 [2024-11-06 15:34:19.871837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.090 [2024-11-06 15:34:19.871847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.090 [2024-11-06 15:34:19.871854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.090 [2024-11-06 15:34:19.871861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.090 [2024-11-06 15:34:19.873970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.090 [2024-11-06 15:34:19.874130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.090 [2024-11-06 15:34:19.874286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.090 [2024-11-06 15:34:19.874288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.662 
15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.662 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.922 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.922 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:02.922 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.922 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.922 [2024-11-06 15:34:20.736249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.922 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.922 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:02.922 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.922 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.923 Malloc1 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.923 [2024-11-06 15:34:20.816648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3834315 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:02.923 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:05.466 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:05.466 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.466 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.466 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.466 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:05.466 "tick_rate": 2400000000, 00:22:05.466 "poll_groups": [ 00:22:05.466 { 00:22:05.466 "name": "nvmf_tgt_poll_group_000", 00:22:05.466 "admin_qpairs": 1, 00:22:05.466 "io_qpairs": 2, 00:22:05.466 "current_admin_qpairs": 1, 00:22:05.466 "current_io_qpairs": 2, 00:22:05.466 "pending_bdev_io": 0, 00:22:05.466 "completed_nvme_io": 27605, 00:22:05.466 "transports": [ 00:22:05.466 { 00:22:05.466 "trtype": "TCP" 00:22:05.466 } 00:22:05.466 ] 00:22:05.466 }, 00:22:05.466 { 00:22:05.466 "name": "nvmf_tgt_poll_group_001", 00:22:05.466 "admin_qpairs": 0, 00:22:05.466 "io_qpairs": 2, 00:22:05.466 "current_admin_qpairs": 0, 00:22:05.466 "current_io_qpairs": 2, 00:22:05.466 "pending_bdev_io": 0, 00:22:05.466 "completed_nvme_io": 29747, 00:22:05.467 "transports": [ 00:22:05.467 { 00:22:05.467 "trtype": "TCP" 00:22:05.467 } 00:22:05.467 ] 00:22:05.467 }, 00:22:05.467 { 00:22:05.467 "name": "nvmf_tgt_poll_group_002", 00:22:05.467 "admin_qpairs": 0, 00:22:05.467 "io_qpairs": 0, 00:22:05.467 "current_admin_qpairs": 0, 00:22:05.467 "current_io_qpairs": 0, 00:22:05.467 "pending_bdev_io": 0, 00:22:05.467 "completed_nvme_io": 0, 00:22:05.467 "transports": [ 00:22:05.467 { 00:22:05.467 "trtype": "TCP" 00:22:05.467 } 00:22:05.467 ] 00:22:05.467 }, 00:22:05.467 { 00:22:05.467 "name": "nvmf_tgt_poll_group_003", 00:22:05.467 "admin_qpairs": 0, 00:22:05.467 "io_qpairs": 0, 00:22:05.467 "current_admin_qpairs": 0, 00:22:05.467 "current_io_qpairs": 0, 00:22:05.467 "pending_bdev_io": 0, 00:22:05.467 "completed_nvme_io": 0, 00:22:05.467 "transports": [ 00:22:05.467 { 00:22:05.467 "trtype": "TCP" 00:22:05.467 } 00:22:05.467 ] 00:22:05.467 } 00:22:05.467 ] 00:22:05.467 }' 00:22:05.467 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:05.467 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:05.467 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:05.467 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:05.467 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3834315 00:22:13.667 Initializing NVMe Controllers 00:22:13.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:13.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:13.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:13.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:13.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:22:13.667 Initialization complete. Launching workers. 00:22:13.667 ======================================================== 00:22:13.667 Latency(us) 00:22:13.667 Device Information : IOPS MiB/s Average min max 00:22:13.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9748.80 38.08 6565.52 1065.97 52499.80 00:22:13.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9224.00 36.03 6959.25 900.41 52077.30 00:22:13.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9692.30 37.86 6602.94 1089.03 51296.36 00:22:13.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8694.50 33.96 7381.44 1091.97 54059.92 00:22:13.667 ======================================================== 00:22:13.667 Total : 37359.60 145.94 6862.32 900.41 54059.92 00:22:13.667 00:22:13.667 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:13.667 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:13.667 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:13.667 rmmod nvme_tcp 00:22:13.667 rmmod nvme_fabrics 00:22:13.667 rmmod nvme_keyring 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3833959 ']' 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3833959 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3833959 ']' 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3833959 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:13.667 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3833959 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3833959' 00:22:13.668 killing process with pid 3833959 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3833959 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3833959 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.668 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:17.021 00:22:17.021 real 0m54.698s 00:22:17.021 user 2m50.173s 00:22:17.021 sys 0m11.598s 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.021 ************************************ 00:22:17.021 END TEST nvmf_perf_adq 00:22:17.021 ************************************ 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:17.021 ************************************ 00:22:17.021 START TEST nvmf_shutdown 00:22:17.021 ************************************ 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:17.021 * Looking for test storage... 
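One pattern worth pulling out of the teardown above: every firewall rule the harness installs is tagged, so cleanup removes exactly those rules and nothing else. The two wrapper names, ipts and iptr, do appear in the trace; the bodies below are a sketch inferred from the expanded iptables commands logged at common.sh@790 and @791, not the verbatim nvmf/common.sh source.

    ipts() {   # install a rule, tagged with a removable comment
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {   # drop every tagged rule, leave the rest of the ruleset alone
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # as logged at setup
    iptr                                                        # as logged at teardown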
00:22:17.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:17.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.021 --rc genhtml_branch_coverage=1 00:22:17.021 --rc genhtml_function_coverage=1 00:22:17.021 --rc genhtml_legend=1 00:22:17.021 --rc geninfo_all_blocks=1 00:22:17.021 --rc geninfo_unexecuted_blocks=1 00:22:17.021 00:22:17.021 ' 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:17.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.021 --rc genhtml_branch_coverage=1 00:22:17.021 --rc genhtml_function_coverage=1 00:22:17.021 --rc genhtml_legend=1 00:22:17.021 --rc geninfo_all_blocks=1 00:22:17.021 --rc geninfo_unexecuted_blocks=1 00:22:17.021 00:22:17.021 ' 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:17.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.021 --rc genhtml_branch_coverage=1 00:22:17.021 --rc genhtml_function_coverage=1 00:22:17.021 --rc genhtml_legend=1 00:22:17.021 --rc geninfo_all_blocks=1 00:22:17.021 --rc geninfo_unexecuted_blocks=1 00:22:17.021 00:22:17.021 ' 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:17.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.021 --rc genhtml_branch_coverage=1 00:22:17.021 --rc genhtml_function_coverage=1 00:22:17.021 --rc genhtml_legend=1 00:22:17.021 --rc geninfo_all_blocks=1 00:22:17.021 --rc geninfo_unexecuted_blocks=1 00:22:17.021 00:22:17.021 ' 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
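The lcov probe above walks scripts/common.sh's version comparator field by field. A condensed sketch of that logic follows; it is simplified in that the real cmp_versions also validates each field through decimal() and handles several operators, while only the less-than path exercised here by lt 1.15 2 is shown.

    lt() {   # usage: lt VER1 VER2 -> success if VER1 < VER2
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"    # split on ".", "-" or ":"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do        # missing fields count as 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov older than 2"   # decided on the first field, as traced

Here the first fields already decide it (1 < 2), which is why the trace returns 0 immediately and goes on to export the matching LCOV_OPTS flags.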
00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.021 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:17.022 15:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:17.022 ************************************ 00:22:17.022 START TEST nvmf_shutdown_tc1 00:22:17.022 ************************************ 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.022 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:25.219 15:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:25.219 15:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:25.219 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.219 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:25.219 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:25.220 Found net devices under 0000:31:00.0: cvl_0_0 00:22:25.220 15:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:25.220 Found net devices under 0000:31:00.1: cvl_0_1 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.220 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:25.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:22:25.220 00:22:25.220 --- 10.0.0.2 ping statistics --- 00:22:25.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.220 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:22:25.220 00:22:25.220 --- 10.0.0.1 ping statistics --- 00:22:25.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.220 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3840821 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3840821 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3840821 ']' 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
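The two successful pings close out nvmf_tcp_init for this test: the target port now lives in its own network namespace while the initiator port stays in the root namespace. Condensed from the commands traced above, with this run's names and addresses (cvl_0_0/cvl_0_1 on 10.0.0.0/24):

    NS=cvl_0_0_ns_spdk TGT=cvl_0_0 INI=cvl_0_1
    ip -4 addr flush "$TGT"; ip -4 addr flush "$INI"
    ip netns add "$NS"
    ip link set "$TGT" netns "$NS"            # target NIC moves into the netns
    ip addr add 10.0.0.1/24 dev "$INI"        # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
    ip link set "$INI" up
    ip netns exec "$NS" ip link set "$TGT" up
    ip netns exec "$NS" ip link set lo up
    ping -c 1 10.0.0.2                        # root ns reaches the target...
    ip netns exec "$NS" ping -c 1 10.0.0.1    # ...and the target ns reaches back

Prefixing NVMF_APP with ip netns exec "$NVMF_TARGET_NAMESPACE" (common.sh@293 above) is what makes the nvmf_tgt launched next bind inside that namespace.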
00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:25.220 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.220 [2024-11-06 15:34:42.262378] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:22:25.220 [2024-11-06 15:34:42.262432] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.220 [2024-11-06 15:34:42.360226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.220 [2024-11-06 15:34:42.412301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.220 [2024-11-06 15:34:42.412350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.221 [2024-11-06 15:34:42.412359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.221 [2024-11-06 15:34:42.412365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.221 [2024-11-06 15:34:42.412371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.221 [2024-11-06 15:34:42.414762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.221 [2024-11-06 15:34:42.414907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.221 [2024-11-06 15:34:42.415146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:25.221 [2024-11-06 15:34:42.415148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.221 [2024-11-06 15:34:43.100901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:25.221 15:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.221 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.221 Malloc1 
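The create_subsystems block above runs the same per-subsystem recipe ten times but batches it: each pass through the num_subsystems loop cats one stanza into rpcs.txt, and the single rpc_cmd at shutdown.sh@36 replays the whole file over one RPC session. The stanza contents are not expanded in the trace; the sketch below infers them from the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener notice that the batch produces, so treat the exact arguments (serial numbers in particular) as illustrative.

    : > "$testdir/rpcs.txt"
    for i in {1..10}; do
        {
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    rpc_cmd < "$testdir/rpcs.txt"   # one RPC session instead of forty rpc.py invocations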
00:22:25.482 [2024-11-06 15:34:43.218784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.482 Malloc2 00:22:25.482 Malloc3 00:22:25.482 Malloc4 00:22:25.482 Malloc5 00:22:25.482 Malloc6 00:22:25.482 Malloc7 00:22:25.744 Malloc8 00:22:25.744 Malloc9 00:22:25.744 Malloc10 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3841200 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3841200 /var/tmp/bdevperf.sock 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3841200 ']' 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
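What follows is shutdown.sh@78 starting a second SPDK app: bdev_svc pinned to core 0 (-m 0x1) as app instance -i 1, with its own RPC socket /var/tmp/bdevperf.sock, and its --json config delivered over /dev/fd/63 by process substitution from gen_nvmf_target_json; shutdown.sh@80's waitforlisten then blocks until that socket answers. The ten identical heredoc stanzas in the trace below are gen_nvmf_target_json expanding one bdev_nvme_attach_controller entry per subsystem, joining the entries with IFS=',' and validating the result with jq. A sketch of the launch-and-wait pattern, with the poll loop written out as an assumption (the real waitforlisten helper is more involved):

    # config arrives on an anonymous fd via process substitution; nothing touches disk
    test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) &
    perfpid=$!
    # hypothetical poll in the spirit of waitforlisten: retry until the RPC socket responds
    until scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

Giving the helper app its own -r socket keeps its RPC traffic separate from the nvmf_tgt under test on /var/tmp/spdk.sock, so the shutdown test can kill one without disturbing the other.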
00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.744 { 00:22:25.744 "params": { 00:22:25.744 "name": "Nvme$subsystem", 00:22:25.744 "trtype": "$TEST_TRANSPORT", 00:22:25.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.744 "adrfam": "ipv4", 00:22:25.744 "trsvcid": "$NVMF_PORT", 00:22:25.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.744 "hdgst": ${hdgst:-false}, 00:22:25.744 "ddgst": ${ddgst:-false} 00:22:25.744 }, 00:22:25.744 "method": "bdev_nvme_attach_controller" 00:22:25.744 } 00:22:25.744 EOF 00:22:25.744 )") 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.744 { 00:22:25.744 "params": { 00:22:25.744 "name": "Nvme$subsystem", 00:22:25.744 "trtype": "$TEST_TRANSPORT", 00:22:25.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.744 "adrfam": "ipv4", 00:22:25.744 "trsvcid": "$NVMF_PORT", 00:22:25.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.744 "hdgst": ${hdgst:-false}, 00:22:25.744 "ddgst": ${ddgst:-false} 00:22:25.744 }, 00:22:25.744 "method": "bdev_nvme_attach_controller" 00:22:25.744 } 00:22:25.744 EOF 00:22:25.744 )") 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.744 { 00:22:25.744 "params": { 00:22:25.744 "name": "Nvme$subsystem", 00:22:25.744 "trtype": "$TEST_TRANSPORT", 00:22:25.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.744 "adrfam": "ipv4", 00:22:25.744 "trsvcid": "$NVMF_PORT", 00:22:25.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.744 "hdgst": ${hdgst:-false}, 00:22:25.744 "ddgst": ${ddgst:-false} 00:22:25.744 }, 00:22:25.744 "method": "bdev_nvme_attach_controller" 00:22:25.744 } 00:22:25.744 EOF 00:22:25.744 )") 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.744 { 00:22:25.744 "params": { 00:22:25.744 "name": "Nvme$subsystem", 00:22:25.744 "trtype": "$TEST_TRANSPORT", 00:22:25.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.744 "adrfam": "ipv4", 00:22:25.744 "trsvcid": "$NVMF_PORT", 00:22:25.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.744 "hdgst": ${hdgst:-false}, 00:22:25.744 "ddgst": ${ddgst:-false} 00:22:25.744 }, 00:22:25.744 "method": "bdev_nvme_attach_controller" 00:22:25.744 } 00:22:25.744 EOF 00:22:25.744 )") 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.744 { 00:22:25.744 "params": { 00:22:25.744 "name": "Nvme$subsystem", 00:22:25.744 "trtype": "$TEST_TRANSPORT", 00:22:25.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.744 "adrfam": "ipv4", 00:22:25.744 "trsvcid": "$NVMF_PORT", 00:22:25.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.744 "hdgst": ${hdgst:-false}, 00:22:25.744 "ddgst": ${ddgst:-false} 00:22:25.744 }, 00:22:25.744 "method": "bdev_nvme_attach_controller" 00:22:25.744 } 00:22:25.744 EOF 00:22:25.744 )") 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.744 { 00:22:25.744 "params": { 00:22:25.744 "name": "Nvme$subsystem", 00:22:25.744 "trtype": "$TEST_TRANSPORT", 00:22:25.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.744 "adrfam": "ipv4", 00:22:25.744 "trsvcid": "$NVMF_PORT", 00:22:25.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.744 "hdgst": ${hdgst:-false}, 00:22:25.744 "ddgst": ${ddgst:-false} 00:22:25.744 }, 00:22:25.744 "method": "bdev_nvme_attach_controller" 00:22:25.744 } 00:22:25.744 EOF 00:22:25.744 )") 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.744 [2024-11-06 15:34:43.675996] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:22:25.744 [2024-11-06 15:34:43.676052] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.744 { 00:22:25.744 "params": { 00:22:25.744 "name": "Nvme$subsystem", 00:22:25.744 "trtype": "$TEST_TRANSPORT", 00:22:25.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.744 "adrfam": "ipv4", 00:22:25.744 "trsvcid": "$NVMF_PORT", 00:22:25.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.744 "hdgst": ${hdgst:-false}, 00:22:25.744 "ddgst": ${ddgst:-false} 00:22:25.744 }, 00:22:25.744 "method": "bdev_nvme_attach_controller" 00:22:25.744 } 00:22:25.744 EOF 00:22:25.744 )") 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.744 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.744 { 00:22:25.744 "params": { 00:22:25.744 "name": "Nvme$subsystem", 00:22:25.744 "trtype": "$TEST_TRANSPORT", 00:22:25.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.744 "adrfam": "ipv4", 00:22:25.744 "trsvcid": "$NVMF_PORT", 00:22:25.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.744 "hdgst": ${hdgst:-false}, 00:22:25.744 "ddgst": ${ddgst:-false} 00:22:25.744 }, 00:22:25.744 "method": "bdev_nvme_attach_controller" 00:22:25.745 } 00:22:25.745 EOF 00:22:25.745 )") 00:22:25.745 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.745 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.745 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.745 { 00:22:25.745 "params": { 00:22:25.745 "name": "Nvme$subsystem", 00:22:25.745 "trtype": "$TEST_TRANSPORT", 00:22:25.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.745 "adrfam": "ipv4", 00:22:25.745 "trsvcid": "$NVMF_PORT", 00:22:25.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.745 "hdgst": ${hdgst:-false}, 00:22:25.745 "ddgst": ${ddgst:-false} 00:22:25.745 }, 00:22:25.745 "method": "bdev_nvme_attach_controller" 00:22:25.745 } 00:22:25.745 EOF 00:22:25.745 )") 00:22:25.745 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.745 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.745 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.745 { 00:22:25.745 "params": { 00:22:25.745 "name": "Nvme$subsystem", 00:22:25.745 "trtype": "$TEST_TRANSPORT", 00:22:25.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.745 "adrfam": "ipv4", 
00:22:25.745 "trsvcid": "$NVMF_PORT", 00:22:25.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.745 "hdgst": ${hdgst:-false}, 00:22:25.745 "ddgst": ${ddgst:-false} 00:22:25.745 }, 00:22:25.745 "method": "bdev_nvme_attach_controller" 00:22:25.745 } 00:22:25.745 EOF 00:22:25.745 )") 00:22:25.745 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.745 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:25.745 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:25.745 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:25.745 "params": { 00:22:25.745 "name": "Nvme1", 00:22:25.745 "trtype": "tcp", 00:22:25.745 "traddr": "10.0.0.2", 00:22:25.745 "adrfam": "ipv4", 00:22:25.745 "trsvcid": "4420", 00:22:25.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.745 "hdgst": false, 00:22:25.745 "ddgst": false 00:22:25.745 }, 00:22:25.745 "method": "bdev_nvme_attach_controller" 00:22:25.745 },{ 00:22:25.745 "params": { 00:22:25.745 "name": "Nvme2", 00:22:25.745 "trtype": "tcp", 00:22:25.745 "traddr": "10.0.0.2", 00:22:25.745 "adrfam": "ipv4", 00:22:25.745 "trsvcid": "4420", 00:22:25.745 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:25.745 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:25.745 "hdgst": false, 00:22:25.745 "ddgst": false 00:22:25.745 }, 00:22:25.745 "method": "bdev_nvme_attach_controller" 00:22:25.745 },{ 00:22:25.745 "params": { 00:22:25.745 "name": "Nvme3", 00:22:25.745 "trtype": "tcp", 00:22:25.745 "traddr": "10.0.0.2", 00:22:25.745 "adrfam": "ipv4", 00:22:25.745 "trsvcid": "4420", 00:22:25.745 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:25.745 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:25.745 "hdgst": false, 00:22:25.745 "ddgst": false 00:22:25.745 }, 00:22:25.745 "method": "bdev_nvme_attach_controller" 00:22:25.745 },{ 00:22:25.745 "params": { 00:22:25.745 "name": "Nvme4", 00:22:25.745 "trtype": "tcp", 00:22:25.745 "traddr": "10.0.0.2", 00:22:25.745 "adrfam": "ipv4", 00:22:25.745 "trsvcid": "4420", 00:22:25.745 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:25.745 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:25.745 "hdgst": false, 00:22:25.745 "ddgst": false 00:22:25.745 }, 00:22:25.745 "method": "bdev_nvme_attach_controller" 00:22:25.745 },{ 00:22:25.745 "params": { 00:22:25.745 "name": "Nvme5", 00:22:25.745 "trtype": "tcp", 00:22:25.745 "traddr": "10.0.0.2", 00:22:25.745 "adrfam": "ipv4", 00:22:25.745 "trsvcid": "4420", 00:22:25.745 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:25.745 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:25.745 "hdgst": false, 00:22:25.745 "ddgst": false 00:22:25.745 }, 00:22:25.745 "method": "bdev_nvme_attach_controller" 00:22:25.745 },{ 00:22:25.745 "params": { 00:22:25.745 "name": "Nvme6", 00:22:25.745 "trtype": "tcp", 00:22:25.745 "traddr": "10.0.0.2", 00:22:25.745 "adrfam": "ipv4", 00:22:25.745 "trsvcid": "4420", 00:22:25.745 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:25.745 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:25.745 "hdgst": false, 00:22:25.745 "ddgst": false 00:22:25.745 }, 00:22:25.745 "method": "bdev_nvme_attach_controller" 00:22:25.745 },{ 00:22:25.745 "params": { 00:22:25.745 "name": "Nvme7", 00:22:25.745 "trtype": "tcp", 00:22:25.745 "traddr": "10.0.0.2", 00:22:25.745 
"adrfam": "ipv4", 00:22:25.745 "trsvcid": "4420", 00:22:25.745 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:25.745 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:25.745 "hdgst": false, 00:22:25.745 "ddgst": false 00:22:25.745 }, 00:22:25.745 "method": "bdev_nvme_attach_controller" 00:22:25.745 },{ 00:22:25.745 "params": { 00:22:25.745 "name": "Nvme8", 00:22:25.745 "trtype": "tcp", 00:22:25.745 "traddr": "10.0.0.2", 00:22:25.745 "adrfam": "ipv4", 00:22:25.745 "trsvcid": "4420", 00:22:25.745 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:25.745 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:25.745 "hdgst": false, 00:22:25.745 "ddgst": false 00:22:25.745 }, 00:22:25.745 "method": "bdev_nvme_attach_controller" 00:22:25.745 },{ 00:22:25.745 "params": { 00:22:25.745 "name": "Nvme9", 00:22:25.745 "trtype": "tcp", 00:22:25.745 "traddr": "10.0.0.2", 00:22:25.745 "adrfam": "ipv4", 00:22:25.745 "trsvcid": "4420", 00:22:25.745 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:25.745 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:25.745 "hdgst": false, 00:22:25.745 "ddgst": false 00:22:25.745 }, 00:22:25.745 "method": "bdev_nvme_attach_controller" 00:22:25.745 },{ 00:22:25.745 "params": { 00:22:25.745 "name": "Nvme10", 00:22:25.745 "trtype": "tcp", 00:22:25.745 "traddr": "10.0.0.2", 00:22:25.745 "adrfam": "ipv4", 00:22:25.745 "trsvcid": "4420", 00:22:25.745 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:25.745 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:25.745 "hdgst": false, 00:22:25.745 "ddgst": false 00:22:25.745 }, 00:22:25.745 "method": "bdev_nvme_attach_controller" 00:22:25.745 }' 00:22:26.006 [2024-11-06 15:34:43.767310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.006 [2024-11-06 15:34:43.803294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.388 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:27.388 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:27.388 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:27.388 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.388 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:27.388 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.388 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3841200 00:22:27.388 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:27.388 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:28.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3841200 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:28.328 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3840821 00:22:28.328 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:28.328 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:28.328 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:28.328 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:28.328 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.329 { 00:22:28.329 "params": { 00:22:28.329 "name": "Nvme$subsystem", 00:22:28.329 "trtype": "$TEST_TRANSPORT", 00:22:28.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.329 "adrfam": "ipv4", 00:22:28.329 "trsvcid": "$NVMF_PORT", 00:22:28.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.329 "hdgst": ${hdgst:-false}, 00:22:28.329 "ddgst": ${ddgst:-false} 00:22:28.329 }, 00:22:28.329 "method": "bdev_nvme_attach_controller" 00:22:28.329 } 00:22:28.329 EOF 00:22:28.329 )") 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.329 { 00:22:28.329 "params": { 00:22:28.329 "name": "Nvme$subsystem", 00:22:28.329 "trtype": "$TEST_TRANSPORT", 00:22:28.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.329 "adrfam": "ipv4", 00:22:28.329 "trsvcid": "$NVMF_PORT", 00:22:28.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.329 "hdgst": ${hdgst:-false}, 00:22:28.329 "ddgst": ${ddgst:-false} 00:22:28.329 }, 00:22:28.329 "method": "bdev_nvme_attach_controller" 00:22:28.329 } 00:22:28.329 EOF 00:22:28.329 )") 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.329 { 00:22:28.329 "params": { 00:22:28.329 "name": "Nvme$subsystem", 00:22:28.329 "trtype": "$TEST_TRANSPORT", 00:22:28.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.329 "adrfam": "ipv4", 00:22:28.329 "trsvcid": "$NVMF_PORT", 00:22:28.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.329 "hdgst": ${hdgst:-false}, 00:22:28.329 "ddgst": ${ddgst:-false} 00:22:28.329 }, 00:22:28.329 "method": "bdev_nvme_attach_controller" 00:22:28.329 } 00:22:28.329 EOF 00:22:28.329 )") 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.329 { 00:22:28.329 "params": { 00:22:28.329 "name": "Nvme$subsystem", 00:22:28.329 "trtype": "$TEST_TRANSPORT", 00:22:28.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.329 "adrfam": "ipv4", 00:22:28.329 "trsvcid": "$NVMF_PORT", 00:22:28.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.329 "hdgst": ${hdgst:-false}, 00:22:28.329 "ddgst": ${ddgst:-false} 00:22:28.329 }, 00:22:28.329 "method": "bdev_nvme_attach_controller" 00:22:28.329 } 00:22:28.329 EOF 00:22:28.329 )") 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.329 { 00:22:28.329 "params": { 00:22:28.329 "name": "Nvme$subsystem", 00:22:28.329 "trtype": "$TEST_TRANSPORT", 00:22:28.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.329 "adrfam": "ipv4", 00:22:28.329 "trsvcid": "$NVMF_PORT", 00:22:28.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.329 "hdgst": ${hdgst:-false}, 00:22:28.329 "ddgst": ${ddgst:-false} 00:22:28.329 }, 00:22:28.329 "method": "bdev_nvme_attach_controller" 00:22:28.329 } 00:22:28.329 EOF 00:22:28.329 )") 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.329 { 00:22:28.329 "params": { 00:22:28.329 "name": "Nvme$subsystem", 00:22:28.329 "trtype": "$TEST_TRANSPORT", 00:22:28.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.329 "adrfam": "ipv4", 00:22:28.329 "trsvcid": "$NVMF_PORT", 00:22:28.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.329 "hdgst": ${hdgst:-false}, 00:22:28.329 "ddgst": ${ddgst:-false} 00:22:28.329 }, 00:22:28.329 "method": "bdev_nvme_attach_controller" 00:22:28.329 } 00:22:28.329 EOF 00:22:28.329 )") 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.329 { 00:22:28.329 "params": { 00:22:28.329 "name": "Nvme$subsystem", 00:22:28.329 "trtype": "$TEST_TRANSPORT", 00:22:28.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.329 "adrfam": "ipv4", 00:22:28.329 "trsvcid": "$NVMF_PORT", 00:22:28.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.329 "hdgst": ${hdgst:-false}, 00:22:28.329 "ddgst": ${ddgst:-false} 00:22:28.329 }, 00:22:28.329 "method": "bdev_nvme_attach_controller" 00:22:28.329 } 00:22:28.329 EOF 00:22:28.329 )") 00:22:28.329 [2024-11-06 15:34:46.294424] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:22:28.329 [2024-11-06 15:34:46.294474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841584 ] 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.329 { 00:22:28.329 "params": { 00:22:28.329 "name": "Nvme$subsystem", 00:22:28.329 "trtype": "$TEST_TRANSPORT", 00:22:28.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.329 "adrfam": "ipv4", 00:22:28.329 "trsvcid": "$NVMF_PORT", 00:22:28.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.329 "hdgst": ${hdgst:-false}, 00:22:28.329 "ddgst": ${ddgst:-false} 00:22:28.329 }, 00:22:28.329 "method": "bdev_nvme_attach_controller" 00:22:28.329 } 00:22:28.329 EOF 00:22:28.329 )") 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.329 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.329 { 00:22:28.329 "params": { 00:22:28.329 "name": "Nvme$subsystem", 00:22:28.329 "trtype": "$TEST_TRANSPORT", 00:22:28.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.329 "adrfam": "ipv4", 00:22:28.329 "trsvcid": "$NVMF_PORT", 00:22:28.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.329 "hdgst": ${hdgst:-false}, 00:22:28.329 "ddgst": ${ddgst:-false} 00:22:28.329 }, 00:22:28.329 "method": "bdev_nvme_attach_controller" 00:22:28.329 } 00:22:28.329 EOF 00:22:28.329 )") 00:22:28.590 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.590 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.590 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.590 { 00:22:28.590 "params": { 00:22:28.590 "name": "Nvme$subsystem", 00:22:28.590 "trtype": "$TEST_TRANSPORT", 00:22:28.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.590 "adrfam": "ipv4", 00:22:28.590 "trsvcid": "$NVMF_PORT", 00:22:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.590 "hdgst": ${hdgst:-false}, 00:22:28.590 "ddgst": ${ddgst:-false} 00:22:28.590 }, 00:22:28.590 "method": "bdev_nvme_attach_controller" 00:22:28.590 } 00:22:28.590 EOF 00:22:28.590 )") 00:22:28.590 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.590 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:28.590 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:28.590 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:28.590 "params": { 00:22:28.590 "name": "Nvme1", 00:22:28.590 "trtype": "tcp", 00:22:28.590 "traddr": "10.0.0.2", 00:22:28.590 "adrfam": "ipv4", 00:22:28.590 "trsvcid": "4420", 00:22:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.590 "hdgst": false, 00:22:28.590 "ddgst": false 00:22:28.590 }, 00:22:28.590 "method": "bdev_nvme_attach_controller" 00:22:28.590 },{ 00:22:28.590 "params": { 00:22:28.590 "name": "Nvme2", 00:22:28.590 "trtype": "tcp", 00:22:28.590 "traddr": "10.0.0.2", 00:22:28.590 "adrfam": "ipv4", 00:22:28.590 "trsvcid": "4420", 00:22:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:28.590 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:28.590 "hdgst": false, 00:22:28.590 "ddgst": false 00:22:28.590 }, 00:22:28.590 "method": "bdev_nvme_attach_controller" 00:22:28.590 },{ 00:22:28.590 "params": { 00:22:28.590 "name": "Nvme3", 00:22:28.590 "trtype": "tcp", 00:22:28.590 "traddr": "10.0.0.2", 00:22:28.590 "adrfam": "ipv4", 00:22:28.590 "trsvcid": "4420", 00:22:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:28.590 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:28.590 "hdgst": false, 00:22:28.590 "ddgst": false 00:22:28.590 }, 00:22:28.590 "method": "bdev_nvme_attach_controller" 00:22:28.590 },{ 00:22:28.590 "params": { 00:22:28.590 "name": "Nvme4", 00:22:28.590 "trtype": "tcp", 00:22:28.590 "traddr": "10.0.0.2", 00:22:28.590 "adrfam": "ipv4", 00:22:28.590 "trsvcid": "4420", 00:22:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:28.590 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:28.590 "hdgst": false, 00:22:28.590 "ddgst": false 00:22:28.590 }, 00:22:28.590 "method": "bdev_nvme_attach_controller" 00:22:28.590 },{ 00:22:28.590 "params": { 00:22:28.590 "name": "Nvme5", 00:22:28.590 "trtype": "tcp", 00:22:28.590 "traddr": "10.0.0.2", 00:22:28.590 "adrfam": "ipv4", 00:22:28.590 "trsvcid": "4420", 00:22:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:28.590 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:28.590 "hdgst": false, 00:22:28.590 "ddgst": false 00:22:28.590 }, 00:22:28.590 "method": "bdev_nvme_attach_controller" 00:22:28.590 },{ 00:22:28.590 "params": { 00:22:28.590 "name": "Nvme6", 00:22:28.590 "trtype": "tcp", 00:22:28.590 "traddr": "10.0.0.2", 00:22:28.590 "adrfam": "ipv4", 00:22:28.590 "trsvcid": "4420", 00:22:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:28.590 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:28.590 "hdgst": false, 00:22:28.590 "ddgst": false 00:22:28.590 }, 00:22:28.590 "method": "bdev_nvme_attach_controller" 00:22:28.590 },{ 00:22:28.590 "params": { 00:22:28.590 "name": "Nvme7", 00:22:28.590 "trtype": "tcp", 00:22:28.590 "traddr": "10.0.0.2", 00:22:28.590 "adrfam": "ipv4", 00:22:28.590 "trsvcid": "4420", 00:22:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:28.590 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:28.590 "hdgst": false, 00:22:28.590 "ddgst": false 00:22:28.590 }, 00:22:28.590 "method": "bdev_nvme_attach_controller" 00:22:28.590 },{ 00:22:28.590 "params": { 00:22:28.590 "name": "Nvme8", 00:22:28.590 "trtype": "tcp", 00:22:28.590 "traddr": "10.0.0.2", 00:22:28.590 "adrfam": "ipv4", 00:22:28.590 "trsvcid": "4420", 00:22:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:28.590 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:28.590 "hdgst": false, 00:22:28.590 "ddgst": false 00:22:28.590 }, 00:22:28.590 "method": "bdev_nvme_attach_controller" 00:22:28.590 },{ 00:22:28.590 "params": { 00:22:28.590 "name": "Nvme9", 00:22:28.590 "trtype": "tcp", 00:22:28.590 "traddr": "10.0.0.2", 00:22:28.590 "adrfam": "ipv4", 00:22:28.590 "trsvcid": "4420", 00:22:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:28.590 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:28.590 "hdgst": false, 00:22:28.590 "ddgst": false 00:22:28.590 }, 00:22:28.590 "method": "bdev_nvme_attach_controller" 00:22:28.590 },{ 00:22:28.590 "params": { 00:22:28.590 "name": "Nvme10", 00:22:28.590 "trtype": "tcp", 00:22:28.590 "traddr": "10.0.0.2", 00:22:28.590 "adrfam": "ipv4", 00:22:28.590 "trsvcid": "4420", 00:22:28.590 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:28.590 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:28.590 "hdgst": false, 00:22:28.590 "ddgst": false 00:22:28.590 }, 00:22:28.590 "method": "bdev_nvme_attach_controller" 00:22:28.590 }' 00:22:28.590 [2024-11-06 15:34:46.383922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.590 [2024-11-06 15:34:46.420156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.973 Running I/O for 1 seconds... 00:22:30.914 1863.00 IOPS, 116.44 MiB/s 00:22:30.914 Latency(us) 00:22:30.914 [2024-11-06T14:34:48.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.914 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.914 Verification LBA range: start 0x0 length 0x400 00:22:30.914 Nvme1n1 : 1.16 221.19 13.82 0.00 0.00 286430.93 15182.51 248162.99 00:22:30.914 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.914 Verification LBA range: start 0x0 length 0x400 00:22:30.914 Nvme2n1 : 1.16 220.46 13.78 0.00 0.00 282741.97 16384.00 263891.63 00:22:30.914 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.914 Verification LBA range: start 0x0 length 0x400 00:22:30.914 Nvme3n1 : 1.15 223.08 13.94 0.00 0.00 274629.97 38229.33 269134.51 00:22:30.914 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.914 Verification LBA range: start 0x0 length 0x400 00:22:30.914 Nvme4n1 : 1.15 222.20 13.89 0.00 0.00 270902.40 18131.63 253405.87 00:22:30.914 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.914 Verification LBA range: start 0x0 length 0x400 00:22:30.914 Nvme5n1 : 1.17 219.54 13.72 0.00 0.00 269774.93 25340.59 255153.49 00:22:30.914 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.914 Verification LBA range: start 0x0 length 0x400 00:22:30.914 Nvme6n1 : 1.14 224.40 14.03 0.00 0.00 258611.84 18459.31 242920.11 00:22:30.914 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.914 Verification LBA range: start 0x0 length 0x400 00:22:30.914 Nvme7n1 : 1.20 267.12 16.70 0.00 0.00 214312.45 14090.24 265639.25 00:22:30.914 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.914 Verification LBA range: start 0x0 length 0x400 00:22:30.914 Nvme8n1 : 1.19 267.91 16.74 0.00 0.00 209549.31 34078.72 227191.47 00:22:30.914 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:30.914 Verification LBA range: start 0x0 length 0x400 00:22:30.914 Nvme9n1 : 1.20 271.07 16.94 0.00 0.00 202786.15 3522.56 241172.48 00:22:30.914 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:22:30.914 Verification LBA range: start 0x0 length 0x400 00:22:30.914 Nvme10n1 : 1.21 265.39 16.59 0.00 0.00 204754.65 5870.93 270882.13 00:22:30.914 [2024-11-06T14:34:48.897Z] =================================================================================================================== 00:22:30.915 [2024-11-06T14:34:48.898Z] Total : 2402.37 150.15 0.00 0.00 243776.79 3522.56 270882.13 00:22:31.175 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:31.175 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.175 rmmod nvme_tcp 00:22:31.175 rmmod nvme_fabrics 00:22:31.175 rmmod nvme_keyring 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3840821 ']' 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3840821 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3840821 ']' 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3840821 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3840821 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3840821' 00:22:31.175 killing process with pid 3840821 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3840821 00:22:31.175 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3840821 00:22:31.436 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.436 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.436 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.436 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:31.436 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.436 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:31.436 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.436 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.436 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.436 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.436 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.436 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:33.982 00:22:33.982 real 0m16.732s 00:22:33.982 user 0m34.009s 00:22:33.982 sys 0m6.662s 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.982 ************************************ 00:22:33.982 END TEST nvmf_shutdown_tc1 00:22:33.982 ************************************ 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:33.982 ************************************ 00:22:33.982 START TEST nvmf_shutdown_tc2 00:22:33.982 ************************************ 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:22:33.982 15:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.982 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:33.983 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:33.983 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:33.983 Found net devices under 0000:31:00.0: cvl_0_0 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.983 15:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:33.983 Found net devices under 0000:31:00.1: cvl_0_1 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:22:33.983 00:22:33.983 --- 10.0.0.2 ping statistics --- 00:22:33.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.983 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:22:33.983 00:22:33.983 --- 10.0.0.1 ping statistics --- 00:22:33.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.983 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.983 15:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3842821 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3842821 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3842821 ']' 00:22:33.983 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.984 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:33.984 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.984 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:33.984 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:34.244 [2024-11-06 15:34:52.007337] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:22:34.244 [2024-11-06 15:34:52.007407] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.244 [2024-11-06 15:34:52.105473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.244 [2024-11-06 15:34:52.146083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.244 [2024-11-06 15:34:52.146118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.244 [2024-11-06 15:34:52.146124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.244 [2024-11-06 15:34:52.146129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.244 [2024-11-06 15:34:52.146133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
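For reference while reading the trace above: the namespace plumbing that nvmf_tcp_init just performed condenses to the short sketch below. Every command is taken verbatim from this run; only the SPDK_DIR shorthand is an assumption standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path the trace shows.

# Sketch of the split-namespace NVMe/TCP topology set up above (nvmf/common.sh, nvmf_tcp_init).
# Assumes two E810 ports already renamed cvl_0_0 / cvl_0_1, as in this run.
SPDK_DIR=${SPDK_DIR:-.}                                  # assumption: path to the SPDK checkout
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                       # root namespace reaches the target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # and the target reaches the initiator
# nvmfappstart then launches the target inside the namespace, as the @508 trace entry records:
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &

The two pings are the same smoke test the trace records: traffic must flow both ways between the physically looped ports before nvmf_tgt is started and waitforlisten polls for /var/tmp/spdk.sock.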
00:22:34.244 [2024-11-06 15:34:52.147625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.244 [2024-11-06 15:34:52.147793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.244 [2024-11-06 15:34:52.147958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.244 [2024-11-06 15:34:52.147958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.187 [2024-11-06 15:34:52.862044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.187 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.187 Malloc1 00:22:35.187 [2024-11-06 15:34:52.969669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.187 Malloc2 00:22:35.187 Malloc3 00:22:35.187 Malloc4 00:22:35.187 Malloc5 00:22:35.187 Malloc6 00:22:35.448 Malloc7 00:22:35.448 Malloc8 00:22:35.448 Malloc9 00:22:35.448 Malloc10 00:22:35.448 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.448 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:35.448 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.448 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.448 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3843077 00:22:35.448 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3843077 /var/tmp/bdevperf.sock 00:22:35.448 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3843077 ']' 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.449 15:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.449 { 00:22:35.449 "params": { 00:22:35.449 "name": "Nvme$subsystem", 00:22:35.449 "trtype": "$TEST_TRANSPORT", 00:22:35.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.449 "adrfam": "ipv4", 00:22:35.449 "trsvcid": "$NVMF_PORT", 00:22:35.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.449 "hdgst": ${hdgst:-false}, 00:22:35.449 "ddgst": ${ddgst:-false} 00:22:35.449 }, 00:22:35.449 "method": "bdev_nvme_attach_controller" 00:22:35.449 } 00:22:35.449 EOF 00:22:35.449 )") 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.449 { 00:22:35.449 "params": { 00:22:35.449 "name": "Nvme$subsystem", 00:22:35.449 "trtype": "$TEST_TRANSPORT", 00:22:35.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.449 "adrfam": "ipv4", 00:22:35.449 "trsvcid": "$NVMF_PORT", 00:22:35.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.449 "hdgst": ${hdgst:-false}, 00:22:35.449 "ddgst": ${ddgst:-false} 00:22:35.449 }, 00:22:35.449 "method": "bdev_nvme_attach_controller" 00:22:35.449 } 00:22:35.449 EOF 00:22:35.449 )") 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.449 { 00:22:35.449 "params": { 00:22:35.449 
"name": "Nvme$subsystem", 00:22:35.449 "trtype": "$TEST_TRANSPORT", 00:22:35.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.449 "adrfam": "ipv4", 00:22:35.449 "trsvcid": "$NVMF_PORT", 00:22:35.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.449 "hdgst": ${hdgst:-false}, 00:22:35.449 "ddgst": ${ddgst:-false} 00:22:35.449 }, 00:22:35.449 "method": "bdev_nvme_attach_controller" 00:22:35.449 } 00:22:35.449 EOF 00:22:35.449 )") 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.449 { 00:22:35.449 "params": { 00:22:35.449 "name": "Nvme$subsystem", 00:22:35.449 "trtype": "$TEST_TRANSPORT", 00:22:35.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.449 "adrfam": "ipv4", 00:22:35.449 "trsvcid": "$NVMF_PORT", 00:22:35.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.449 "hdgst": ${hdgst:-false}, 00:22:35.449 "ddgst": ${ddgst:-false} 00:22:35.449 }, 00:22:35.449 "method": "bdev_nvme_attach_controller" 00:22:35.449 } 00:22:35.449 EOF 00:22:35.449 )") 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.449 { 00:22:35.449 "params": { 00:22:35.449 "name": "Nvme$subsystem", 00:22:35.449 "trtype": "$TEST_TRANSPORT", 00:22:35.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.449 "adrfam": "ipv4", 00:22:35.449 "trsvcid": "$NVMF_PORT", 00:22:35.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.449 "hdgst": ${hdgst:-false}, 00:22:35.449 "ddgst": ${ddgst:-false} 00:22:35.449 }, 00:22:35.449 "method": "bdev_nvme_attach_controller" 00:22:35.449 } 00:22:35.449 EOF 00:22:35.449 )") 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.449 { 00:22:35.449 "params": { 00:22:35.449 "name": "Nvme$subsystem", 00:22:35.449 "trtype": "$TEST_TRANSPORT", 00:22:35.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.449 "adrfam": "ipv4", 00:22:35.449 "trsvcid": "$NVMF_PORT", 00:22:35.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.449 "hdgst": ${hdgst:-false}, 00:22:35.449 "ddgst": ${ddgst:-false} 00:22:35.449 }, 00:22:35.449 "method": "bdev_nvme_attach_controller" 00:22:35.449 } 00:22:35.449 EOF 00:22:35.449 )") 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.449 [2024-11-06 15:34:53.420676] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:22:35.449 [2024-11-06 15:34:53.420731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843077 ] 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.449 { 00:22:35.449 "params": { 00:22:35.449 "name": "Nvme$subsystem", 00:22:35.449 "trtype": "$TEST_TRANSPORT", 00:22:35.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.449 "adrfam": "ipv4", 00:22:35.449 "trsvcid": "$NVMF_PORT", 00:22:35.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.449 "hdgst": ${hdgst:-false}, 00:22:35.449 "ddgst": ${ddgst:-false} 00:22:35.449 }, 00:22:35.449 "method": "bdev_nvme_attach_controller" 00:22:35.449 } 00:22:35.449 EOF 00:22:35.449 )") 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.449 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.710 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.710 { 00:22:35.710 "params": { 00:22:35.710 "name": "Nvme$subsystem", 00:22:35.710 "trtype": "$TEST_TRANSPORT", 00:22:35.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.710 "adrfam": "ipv4", 00:22:35.710 "trsvcid": "$NVMF_PORT", 00:22:35.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.710 "hdgst": ${hdgst:-false}, 00:22:35.710 "ddgst": ${ddgst:-false} 00:22:35.710 }, 00:22:35.710 "method": "bdev_nvme_attach_controller" 00:22:35.710 } 00:22:35.710 EOF 00:22:35.710 )") 00:22:35.710 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.710 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.710 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.710 { 00:22:35.710 "params": { 00:22:35.710 "name": "Nvme$subsystem", 00:22:35.710 "trtype": "$TEST_TRANSPORT", 00:22:35.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.710 "adrfam": "ipv4", 00:22:35.710 "trsvcid": "$NVMF_PORT", 00:22:35.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.710 "hdgst": ${hdgst:-false}, 00:22:35.710 "ddgst": ${ddgst:-false} 00:22:35.710 }, 00:22:35.710 "method": "bdev_nvme_attach_controller" 00:22:35.710 } 00:22:35.710 EOF 00:22:35.710 )") 00:22:35.710 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.710 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.710 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.710 { 00:22:35.710 "params": { 00:22:35.710 "name": "Nvme$subsystem", 00:22:35.710 "trtype": "$TEST_TRANSPORT", 00:22:35.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.710 
"adrfam": "ipv4", 00:22:35.710 "trsvcid": "$NVMF_PORT", 00:22:35.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.710 "hdgst": ${hdgst:-false}, 00:22:35.710 "ddgst": ${ddgst:-false} 00:22:35.710 }, 00:22:35.710 "method": "bdev_nvme_attach_controller" 00:22:35.710 } 00:22:35.710 EOF 00:22:35.710 )") 00:22:35.710 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.710 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:35.710 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:35.710 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:35.710 "params": { 00:22:35.710 "name": "Nvme1", 00:22:35.710 "trtype": "tcp", 00:22:35.710 "traddr": "10.0.0.2", 00:22:35.710 "adrfam": "ipv4", 00:22:35.710 "trsvcid": "4420", 00:22:35.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.710 "hdgst": false, 00:22:35.710 "ddgst": false 00:22:35.710 }, 00:22:35.710 "method": "bdev_nvme_attach_controller" 00:22:35.710 },{ 00:22:35.710 "params": { 00:22:35.710 "name": "Nvme2", 00:22:35.710 "trtype": "tcp", 00:22:35.710 "traddr": "10.0.0.2", 00:22:35.710 "adrfam": "ipv4", 00:22:35.710 "trsvcid": "4420", 00:22:35.710 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:35.710 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:35.710 "hdgst": false, 00:22:35.710 "ddgst": false 00:22:35.710 }, 00:22:35.710 "method": "bdev_nvme_attach_controller" 00:22:35.710 },{ 00:22:35.710 "params": { 00:22:35.710 "name": "Nvme3", 00:22:35.710 "trtype": "tcp", 00:22:35.710 "traddr": "10.0.0.2", 00:22:35.710 "adrfam": "ipv4", 00:22:35.710 "trsvcid": "4420", 00:22:35.710 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:35.710 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:35.710 "hdgst": false, 00:22:35.710 "ddgst": false 00:22:35.710 }, 00:22:35.710 "method": "bdev_nvme_attach_controller" 00:22:35.710 },{ 00:22:35.710 "params": { 00:22:35.710 "name": "Nvme4", 00:22:35.710 "trtype": "tcp", 00:22:35.710 "traddr": "10.0.0.2", 00:22:35.710 "adrfam": "ipv4", 00:22:35.710 "trsvcid": "4420", 00:22:35.710 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:35.710 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:35.710 "hdgst": false, 00:22:35.710 "ddgst": false 00:22:35.710 }, 00:22:35.710 "method": "bdev_nvme_attach_controller" 00:22:35.710 },{ 00:22:35.710 "params": { 00:22:35.710 "name": "Nvme5", 00:22:35.710 "trtype": "tcp", 00:22:35.710 "traddr": "10.0.0.2", 00:22:35.710 "adrfam": "ipv4", 00:22:35.710 "trsvcid": "4420", 00:22:35.710 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:35.710 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:35.710 "hdgst": false, 00:22:35.710 "ddgst": false 00:22:35.710 }, 00:22:35.710 "method": "bdev_nvme_attach_controller" 00:22:35.710 },{ 00:22:35.710 "params": { 00:22:35.710 "name": "Nvme6", 00:22:35.710 "trtype": "tcp", 00:22:35.710 "traddr": "10.0.0.2", 00:22:35.710 "adrfam": "ipv4", 00:22:35.710 "trsvcid": "4420", 00:22:35.710 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:35.711 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:35.711 "hdgst": false, 00:22:35.711 "ddgst": false 00:22:35.711 }, 00:22:35.711 "method": "bdev_nvme_attach_controller" 00:22:35.711 },{ 00:22:35.711 "params": { 00:22:35.711 "name": "Nvme7", 00:22:35.711 "trtype": "tcp", 00:22:35.711 "traddr": "10.0.0.2", 
00:22:35.711 "adrfam": "ipv4", 00:22:35.711 "trsvcid": "4420", 00:22:35.711 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:35.711 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:35.711 "hdgst": false, 00:22:35.711 "ddgst": false 00:22:35.711 }, 00:22:35.711 "method": "bdev_nvme_attach_controller" 00:22:35.711 },{ 00:22:35.711 "params": { 00:22:35.711 "name": "Nvme8", 00:22:35.711 "trtype": "tcp", 00:22:35.711 "traddr": "10.0.0.2", 00:22:35.711 "adrfam": "ipv4", 00:22:35.711 "trsvcid": "4420", 00:22:35.711 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:35.711 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:35.711 "hdgst": false, 00:22:35.711 "ddgst": false 00:22:35.711 }, 00:22:35.711 "method": "bdev_nvme_attach_controller" 00:22:35.711 },{ 00:22:35.711 "params": { 00:22:35.711 "name": "Nvme9", 00:22:35.711 "trtype": "tcp", 00:22:35.711 "traddr": "10.0.0.2", 00:22:35.711 "adrfam": "ipv4", 00:22:35.711 "trsvcid": "4420", 00:22:35.711 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:35.711 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:35.711 "hdgst": false, 00:22:35.711 "ddgst": false 00:22:35.711 }, 00:22:35.711 "method": "bdev_nvme_attach_controller" 00:22:35.711 },{ 00:22:35.711 "params": { 00:22:35.711 "name": "Nvme10", 00:22:35.711 "trtype": "tcp", 00:22:35.711 "traddr": "10.0.0.2", 00:22:35.711 "adrfam": "ipv4", 00:22:35.711 "trsvcid": "4420", 00:22:35.711 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:35.711 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:35.711 "hdgst": false, 00:22:35.711 "ddgst": false 00:22:35.711 }, 00:22:35.711 "method": "bdev_nvme_attach_controller" 00:22:35.711 }' 00:22:35.711 [2024-11-06 15:34:53.511086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.711 [2024-11-06 15:34:53.547518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.095 Running I/O for 10 seconds... 
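After bdevperf reports "Running I/O for 10 seconds...", the waitforio helper traced next gates the shutdown test on real traffic: it polls bdevperf's RPC socket until Nvme1n1 has completed at least 100 reads, retrying up to ten times with a 0.25 s pause, exactly the i=10 / sleep 0.25 loop visible below. A standalone sketch (the rpc.py path is an assumption about a stock SPDK checkout; the RPC name and jq filter are the ones in the trace):

# Sketch of target/shutdown.sh's waitforio readiness poll, as traced below.
rpc=./scripts/rpc.py               # assumption: run from the SPDK source tree
sock=/var/tmp/bdevperf.sock        # bdevperf RPC socket used in this run
ret=1
for ((i = 10; i != 0; i--)); do
    reads=$("$rpc" -s "$sock" bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then  # same threshold the trace checks
        ret=0
        break
    fi
    sleep 0.25
done
(( ret == 0 )) && echo "Nvme1n1 confirmed serving I/O ($reads reads)"

In this run the counter went 3, then 74, then 195, at which point the loop breaks and the test proceeds to kill bdevperf.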
00:22:37.095 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:37.095 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:37.095 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:37.095 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.095 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.095 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.095 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:37.095 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:37.095 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:37.095 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:37.095 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:37.095 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:37.095 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:37.095 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:37.095 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:37.095 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.095 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.356 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.356 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:37.356 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:37.356 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:37.617 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:37.617 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:37.617 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:37.617 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:37.617 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.617 15:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.617 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.617 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=74 00:22:37.617 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 74 -ge 100 ']' 00:22:37.617 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3843077 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3843077 ']' 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3843077 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3843077 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3843077' 00:22:37.879 killing process with pid 3843077 00:22:37.879 15:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3843077 00:22:37.879 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3843077
00:22:37.879 Received shutdown signal, test time was about 0.979565 seconds
00:22:37.879
00:22:37.879 Latency(us)
00:22:37.879 [2024-11-06T14:34:55.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:37.879 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.879 Verification LBA range: start 0x0 length 0x400
00:22:37.879 Nvme1n1 : 0.97 276.63 17.29 0.00 0.00 226871.46 8301.23 217579.52
00:22:37.879 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.879 Verification LBA range: start 0x0 length 0x400
00:22:37.879 Nvme2n1 : 0.95 202.88 12.68 0.00 0.00 305311.29 16930.13 253405.87
00:22:37.879 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.879 Verification LBA range: start 0x0 length 0x400
00:22:37.879 Nvme3n1 : 0.98 261.58 16.35 0.00 0.00 232113.71 16930.13 249910.61
00:22:37.879 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.879 Verification LBA range: start 0x0 length 0x400
00:22:37.879 Nvme4n1 : 0.96 269.75 16.86 0.00 0.00 218290.86 7372.80 249910.61
00:22:37.879 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.879 Verification LBA range: start 0x0 length 0x400
00:22:37.879 Nvme5n1 : 0.97 263.97 16.50 0.00 0.00 220147.41 22282.24 253405.87
00:22:37.879 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.879 Verification LBA range: start 0x0 length 0x400
00:22:37.879 Nvme6n1 : 0.95 201.26 12.58 0.00 0.00 282022.68 36263.25 258648.75
00:22:37.879 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.879 Verification LBA range: start 0x0 length 0x400
00:22:37.879 Nvme7n1 : 0.96 265.69 16.61 0.00 0.00 208650.24 14745.60 225443.84
00:22:37.879 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.879 Verification LBA range: start 0x0 length 0x400
00:22:37.879 Nvme8n1 : 0.97 263.02 16.44 0.00 0.00 206542.29 17367.04 249910.61
00:22:37.879 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.879 Verification LBA range: start 0x0 length 0x400
00:22:37.879 Nvme9n1 : 0.94 204.33 12.77 0.00 0.00 257815.89 19770.03 246415.36
00:22:37.879 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:37.879 Verification LBA range: start 0x0 length 0x400
00:22:37.879 Nvme10n1 : 0.96 200.33 12.52 0.00 0.00 257632.14 23702.19 276125.01
00:22:37.879 [2024-11-06T14:34:55.862Z] ===================================================================================================================
00:22:37.879 [2024-11-06T14:34:55.862Z] Total : 2409.43 150.59 0.00 0.00 237667.94 7372.80 276125.01
00:22:38.140 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:39.082 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3842821 00:22:39.082 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:39.082 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:39.082 15:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:39.082 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:39.082 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:39.082 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:39.082 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:39.082 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:39.082 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:39.082 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:39.082 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:39.082 rmmod nvme_tcp 00:22:39.082 rmmod nvme_fabrics 00:22:39.082 rmmod nvme_keyring 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3842821 ']' 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3842821 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3842821 ']' 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3842821 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3842821 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3842821' 00:22:39.343 killing process with pid 3842821 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3842821 00:22:39.343 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3842821 00:22:39.605 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:39.605 15:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:39.605 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:39.605 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:39.605 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:39.605 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:39.605 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:39.605 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.605 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:39.605 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.605 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.605 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.519 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.519 00:22:41.519 real 0m7.909s 00:22:41.519 user 0m23.688s 00:22:41.519 sys 0m1.307s 00:22:41.519 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:41.519 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.519 ************************************ 00:22:41.519 END TEST nvmf_shutdown_tc2 00:22:41.519 ************************************ 00:22:41.519 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:41.519 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:41.519 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:41.519 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:41.781 ************************************ 00:22:41.781 START TEST nvmf_shutdown_tc3 00:22:41.781 ************************************ 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.781 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:41.782 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:41.782 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.782 15:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:41.782 Found net devices under 0000:31:00.0: cvl_0_0 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:41.782 Found net devices under 0000:31:00.1: cvl_0_1 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.782 15:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:41.782 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:22:42.044 00:22:42.044 --- 10.0.0.2 ping statistics --- 00:22:42.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.044 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:22:42.044 00:22:42.044 --- 10.0.0.1 ping statistics --- 00:22:42.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.044 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3844538 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3844538 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:42.044 15:34:59 
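[editor's note] The nvmf_tcp_init sequence traced above puts one e810 port (cvl_0_0, 10.0.0.2) inside the cvl_0_0_ns_spdk namespace as the target side, leaves the other (cvl_0_1, 10.0.0.1) in the default namespace as the initiator, opens TCP port 4420, and proves two-way reachability with single pings before nvmf_tgt is launched. A minimal standalone sketch of that plumbing, using the interface names and addresses from this run (ipts is the harness helper whose expansion is shown at common.sh@790; outside the harness, plain iptables with the same SPDK_NVMF comment is equivalent):

    #!/usr/bin/env bash
    # Sketch of nvmf_tcp_init's namespace plumbing (names/IPs from the trace above).
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0                                 # start from clean interfaces
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                       # default ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target ns -> initiator

Because the target runs inside the namespace, the NVMF_APP command is prefixed with "ip netns exec cvl_0_0_ns_spdk", which is exactly why the nvmf_tgt launch above carries that prefix.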
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3844538 ']' 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:42.044 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.044 [2024-11-06 15:34:59.993436] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:22:42.044 [2024-11-06 15:34:59.993485] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.305 [2024-11-06 15:35:00.058067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.305 [2024-11-06 15:35:00.091263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.305 [2024-11-06 15:35:00.091294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.305 [2024-11-06 15:35:00.091300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.305 [2024-11-06 15:35:00.091305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.305 [2024-11-06 15:35:00.091309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:42.305 [2024-11-06 15:35:00.092601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.305 [2024-11-06 15:35:00.092777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.305 [2024-11-06 15:35:00.092962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.305 [2024-11-06 15:35:00.092963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.305 [2024-11-06 15:35:00.230796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.305 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:42.565 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.565 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:42.565 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:42.565 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.565 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.565 Malloc1 00:22:42.565 [2024-11-06 15:35:00.341837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.565 Malloc2 00:22:42.565 Malloc3 00:22:42.565 Malloc4 00:22:42.565 Malloc5 00:22:42.565 Malloc6 00:22:42.826 Malloc7 00:22:42.826 Malloc8 00:22:42.826 Malloc9 00:22:42.826 Malloc10 00:22:42.826 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.826 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:42.826 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.826 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.826 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3844719 00:22:42.826 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3844719 /var/tmp/bdevperf.sock 00:22:42.826 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3844719 ']' 00:22:42.826 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.826 15:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:42.826 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:42.826 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:42.826 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:42.826 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.827 { 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme$subsystem", 00:22:42.827 "trtype": "$TEST_TRANSPORT", 00:22:42.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "$NVMF_PORT", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.827 "hdgst": ${hdgst:-false}, 00:22:42.827 "ddgst": ${ddgst:-false} 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 } 00:22:42.827 EOF 00:22:42.827 )") 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.827 { 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme$subsystem", 00:22:42.827 "trtype": "$TEST_TRANSPORT", 00:22:42.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "$NVMF_PORT", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.827 "hdgst": ${hdgst:-false}, 00:22:42.827 "ddgst": ${ddgst:-false} 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 } 00:22:42.827 EOF 00:22:42.827 )") 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.827 { 00:22:42.827 "params": { 00:22:42.827 
"name": "Nvme$subsystem", 00:22:42.827 "trtype": "$TEST_TRANSPORT", 00:22:42.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "$NVMF_PORT", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.827 "hdgst": ${hdgst:-false}, 00:22:42.827 "ddgst": ${ddgst:-false} 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 } 00:22:42.827 EOF 00:22:42.827 )") 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.827 { 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme$subsystem", 00:22:42.827 "trtype": "$TEST_TRANSPORT", 00:22:42.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "$NVMF_PORT", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.827 "hdgst": ${hdgst:-false}, 00:22:42.827 "ddgst": ${ddgst:-false} 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 } 00:22:42.827 EOF 00:22:42.827 )") 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.827 { 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme$subsystem", 00:22:42.827 "trtype": "$TEST_TRANSPORT", 00:22:42.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "$NVMF_PORT", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.827 "hdgst": ${hdgst:-false}, 00:22:42.827 "ddgst": ${ddgst:-false} 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 } 00:22:42.827 EOF 00:22:42.827 )") 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.827 { 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme$subsystem", 00:22:42.827 "trtype": "$TEST_TRANSPORT", 00:22:42.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "$NVMF_PORT", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.827 "hdgst": ${hdgst:-false}, 00:22:42.827 "ddgst": ${ddgst:-false} 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 } 00:22:42.827 EOF 00:22:42.827 )") 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:42.827 [2024-11-06 15:35:00.791367] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:22:42.827 [2024-11-06 15:35:00.791421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844719 ] 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.827 { 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme$subsystem", 00:22:42.827 "trtype": "$TEST_TRANSPORT", 00:22:42.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "$NVMF_PORT", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.827 "hdgst": ${hdgst:-false}, 00:22:42.827 "ddgst": ${ddgst:-false} 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 } 00:22:42.827 EOF 00:22:42.827 )") 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.827 { 00:22:42.827 "params": { 00:22:42.827 "name": "Nvme$subsystem", 00:22:42.827 "trtype": "$TEST_TRANSPORT", 00:22:42.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.827 "adrfam": "ipv4", 00:22:42.827 "trsvcid": "$NVMF_PORT", 00:22:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.827 "hdgst": ${hdgst:-false}, 00:22:42.827 "ddgst": ${ddgst:-false} 00:22:42.827 }, 00:22:42.827 "method": "bdev_nvme_attach_controller" 00:22:42.827 } 00:22:42.827 EOF 00:22:42.827 )") 00:22:42.827 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:43.088 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.088 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.088 { 00:22:43.088 "params": { 00:22:43.088 "name": "Nvme$subsystem", 00:22:43.088 "trtype": "$TEST_TRANSPORT", 00:22:43.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.089 "adrfam": "ipv4", 00:22:43.089 "trsvcid": "$NVMF_PORT", 00:22:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.089 "hdgst": ${hdgst:-false}, 00:22:43.089 "ddgst": ${ddgst:-false} 00:22:43.089 }, 00:22:43.089 "method": "bdev_nvme_attach_controller" 00:22:43.089 } 00:22:43.089 EOF 00:22:43.089 )") 00:22:43.089 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:43.089 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.089 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.089 { 00:22:43.089 "params": { 00:22:43.089 "name": "Nvme$subsystem", 00:22:43.089 "trtype": "$TEST_TRANSPORT", 00:22:43.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.089 
"adrfam": "ipv4", 00:22:43.089 "trsvcid": "$NVMF_PORT", 00:22:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.089 "hdgst": ${hdgst:-false}, 00:22:43.089 "ddgst": ${ddgst:-false} 00:22:43.089 }, 00:22:43.089 "method": "bdev_nvme_attach_controller" 00:22:43.089 } 00:22:43.089 EOF 00:22:43.089 )") 00:22:43.089 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:43.089 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:43.089 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:43.089 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:43.089 "params": { 00:22:43.089 "name": "Nvme1", 00:22:43.089 "trtype": "tcp", 00:22:43.089 "traddr": "10.0.0.2", 00:22:43.089 "adrfam": "ipv4", 00:22:43.089 "trsvcid": "4420", 00:22:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.089 "hdgst": false, 00:22:43.089 "ddgst": false 00:22:43.089 }, 00:22:43.089 "method": "bdev_nvme_attach_controller" 00:22:43.089 },{ 00:22:43.089 "params": { 00:22:43.089 "name": "Nvme2", 00:22:43.089 "trtype": "tcp", 00:22:43.089 "traddr": "10.0.0.2", 00:22:43.089 "adrfam": "ipv4", 00:22:43.089 "trsvcid": "4420", 00:22:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:43.089 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:43.089 "hdgst": false, 00:22:43.089 "ddgst": false 00:22:43.089 }, 00:22:43.089 "method": "bdev_nvme_attach_controller" 00:22:43.089 },{ 00:22:43.089 "params": { 00:22:43.089 "name": "Nvme3", 00:22:43.089 "trtype": "tcp", 00:22:43.089 "traddr": "10.0.0.2", 00:22:43.089 "adrfam": "ipv4", 00:22:43.089 "trsvcid": "4420", 00:22:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:43.089 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:43.089 "hdgst": false, 00:22:43.089 "ddgst": false 00:22:43.089 }, 00:22:43.089 "method": "bdev_nvme_attach_controller" 00:22:43.089 },{ 00:22:43.089 "params": { 00:22:43.089 "name": "Nvme4", 00:22:43.089 "trtype": "tcp", 00:22:43.089 "traddr": "10.0.0.2", 00:22:43.089 "adrfam": "ipv4", 00:22:43.089 "trsvcid": "4420", 00:22:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:43.089 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:43.089 "hdgst": false, 00:22:43.089 "ddgst": false 00:22:43.089 }, 00:22:43.089 "method": "bdev_nvme_attach_controller" 00:22:43.089 },{ 00:22:43.089 "params": { 00:22:43.089 "name": "Nvme5", 00:22:43.089 "trtype": "tcp", 00:22:43.089 "traddr": "10.0.0.2", 00:22:43.089 "adrfam": "ipv4", 00:22:43.089 "trsvcid": "4420", 00:22:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:43.089 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:43.089 "hdgst": false, 00:22:43.089 "ddgst": false 00:22:43.089 }, 00:22:43.089 "method": "bdev_nvme_attach_controller" 00:22:43.089 },{ 00:22:43.089 "params": { 00:22:43.089 "name": "Nvme6", 00:22:43.089 "trtype": "tcp", 00:22:43.089 "traddr": "10.0.0.2", 00:22:43.089 "adrfam": "ipv4", 00:22:43.089 "trsvcid": "4420", 00:22:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:43.089 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:43.089 "hdgst": false, 00:22:43.089 "ddgst": false 00:22:43.089 }, 00:22:43.089 "method": "bdev_nvme_attach_controller" 00:22:43.089 },{ 00:22:43.089 "params": { 00:22:43.089 "name": "Nvme7", 00:22:43.089 "trtype": "tcp", 00:22:43.089 "traddr": "10.0.0.2", 
00:22:43.089 "adrfam": "ipv4", 00:22:43.089 "trsvcid": "4420", 00:22:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:43.089 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:43.089 "hdgst": false, 00:22:43.089 "ddgst": false 00:22:43.089 }, 00:22:43.089 "method": "bdev_nvme_attach_controller" 00:22:43.089 },{ 00:22:43.089 "params": { 00:22:43.089 "name": "Nvme8", 00:22:43.089 "trtype": "tcp", 00:22:43.089 "traddr": "10.0.0.2", 00:22:43.089 "adrfam": "ipv4", 00:22:43.089 "trsvcid": "4420", 00:22:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:43.089 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:43.089 "hdgst": false, 00:22:43.089 "ddgst": false 00:22:43.089 }, 00:22:43.089 "method": "bdev_nvme_attach_controller" 00:22:43.089 },{ 00:22:43.089 "params": { 00:22:43.089 "name": "Nvme9", 00:22:43.089 "trtype": "tcp", 00:22:43.089 "traddr": "10.0.0.2", 00:22:43.089 "adrfam": "ipv4", 00:22:43.089 "trsvcid": "4420", 00:22:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:43.089 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:43.089 "hdgst": false, 00:22:43.089 "ddgst": false 00:22:43.089 }, 00:22:43.089 "method": "bdev_nvme_attach_controller" 00:22:43.089 },{ 00:22:43.089 "params": { 00:22:43.089 "name": "Nvme10", 00:22:43.089 "trtype": "tcp", 00:22:43.089 "traddr": "10.0.0.2", 00:22:43.089 "adrfam": "ipv4", 00:22:43.089 "trsvcid": "4420", 00:22:43.089 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:43.089 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:43.089 "hdgst": false, 00:22:43.089 "ddgst": false 00:22:43.089 }, 00:22:43.089 "method": "bdev_nvme_attach_controller" 00:22:43.089 }' 00:22:43.089 [2024-11-06 15:35:00.881607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.089 [2024-11-06 15:35:00.918067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.002 Running I/O for 10 seconds... 
00:22:45.002 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:45.003 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:45.263 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:45.263 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:45.263 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:45.263 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:45.263 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.263 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.263 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.263 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:45.263 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:45.263 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:45.531 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:45.531 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:45.531 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:45.531 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:45.531 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.531 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3844538 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3844538 ']' 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3844538 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3844538 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:45.532 15:35:03 
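[editor's note] The probes above are the waitforio helper from shutdown.sh (@58 through @70): it polls bdevperf's RPC socket for Nvme1n1's read counter and declares the run live once num_read_ops crosses 100 within ten quarter-second tries; here it climbs 3 -> 67 -> 131, so ret flips to 0 and the loop breaks. A sketch of that loop under the same thresholds, assuming SPDK's scripts/rpc.py (which the harness wraps as rpc_cmd) and jq are on PATH:

    # Poll a bdev's read-op counter until it proves I/O is flowing (cf. shutdown.sh@58-@70).
    waitforio() {
        local sock=$1 bdev=$2 i=10 ret=1 count
        while (( i != 0 )); do
            count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                    | jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                ret=0          # enough reads observed; the target is serving I/O
                break
            fi
            sleep 0.25
            (( i-- ))
        done
        return $ret
    }
    waitforio /var/tmp/bdevperf.sock Nvme1n1

Only after this succeeds does the test proceed to kill the target (tc3 exercises shutdown while I/O is in flight), which is what triggers the error burst below.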
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3844538' 00:22:45.532 killing process with pid 3844538 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3844538 00:22:45.532 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3844538 00:22:45.532 [2024-11-06 15:35:03.462895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8fa0 is same with the state(6) to be set 00:22:45.532
[last message repeated for tqpair=0x15c8fa0 at timestamps 15:35:03.462942 through 15:35:03.463237]
[2024-11-06 15:35:03.464243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1823200 is same with the state(6) to be set 00:22:45.532
[last message repeated for tqpair=0x1823200 at timestamps 15:35:03.464268 through 15:35:03.464565]
[2024-11-06 15:35:03.466005] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:45.533
[2024-11-06 15:35:03.468382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9470 is same with the state(6) to be set 00:22:45.533
[last message repeated for tqpair=0x15c9470 at timestamps 15:35:03.468397 through 15:35:03.468426]
[2024-11-06 15:35:03.469213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.533
[last message repeated for tqpair=0x15c9940 at timestamps 15:35:03.469240 through 15:35:03.469293]
[2024-11-06 15:35:03.469298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the
state(6) to be set 00:22:45.533 [2024-11-06 15:35:03.469302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.533 [2024-11-06 15:35:03.469307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.533 [2024-11-06 15:35:03.469312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 
15:35:03.469507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.469530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9940 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same 
with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470535] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.534 [2024-11-06 15:35:03.470540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the 
state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.470655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9e30 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca300 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 
15:35:03.471922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.471998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.472002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.472007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.472012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.472016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.535 [2024-11-06 15:35:03.472021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same 
with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.472025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.472030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.472035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ca7d0 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.472824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15caca0 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473389] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the 
state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.473605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832f90 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.474065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.536 [2024-11-06 15:35:03.474079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 
15:35:03.474165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same 
with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.474348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.482106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.482126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.482132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.482138] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822d10 is same with the state(6) to be set 00:22:45.537 [2024-11-06 15:35:03.486631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.537 [2024-11-06 15:35:03.486661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.537 [... the same WRITE/ABORTED pair repeats for cid:37 through cid:63 (lba 29312-32640, len:128), then as READ/ABORTED for cid:0 through cid:35 (lba 24576-29056), timestamps 15:35:03.486678 through 15:35:03.487808 ...] 00:22:45.539 [2024-11-06 15:35:03.487839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:45.539 [2024-11-06 15:35:03.488003] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:45.539 [2024-11-06 15:35:03.488061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.539 [2024-11-06 15:35:03.488073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.539 [... the same ASYNC EVENT REQUEST/ABORTED pair repeats for cid:1 through cid:3, timestamps 15:35:03.488082 through 15:35:03.488124 ...] 00:22:45.539 [2024-11-06 15:35:03.488132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e2b10 is same with the state(6) to be set 
00:22:45.539 [... the same four-pair ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 / ABORTED - SQ DELETION (00/08) group, each closed by a nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state recv-state *ERROR*, repeats for tqpair=0x161a730, 0x11c61b0, 0x11b9fd0, 0x10dc610, 0x161a4f0, 0x16198d0, 0x11b9dd0, 0x15ed040 and 0x11c5d30, timestamps 15:35:03.488162 through 15:35:03.488951 ...] 00:22:45.540 [2024-11-06 15:35:03.489132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.540 [2024-11-06 15:35:03.489146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:45.540 [... the same WRITE/ABORTED pair repeats for cid:1 through cid:63 (lba 24704-32640, len:128), timestamps 15:35:03.489158 through 15:35:03.490225 ...] 00:22:45.542 [2024-11-06 15:35:03.491621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:45.542 [2024-11-06 15:35:03.491650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dc610 (9): Bad file descriptor 00:22:45.542 [2024-11-06 15:35:03.493079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:45.542 [2024-11-06 15:35:03.493105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b9dd0 (9): Bad file descriptor 00:22:45.542 [2024-11-06 15:35:03.494017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.542 [2024-11-06 15:35:03.494041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10dc610 with addr=10.0.0.2, port=4420 00:22:45.542 [2024-11-06 15:35:03.494049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dc610 is same with the state(6) to be set 00:22:45.542 [2024-11-06 15:35:03.494509] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:45.542 [2024-11-06 15:35:03.494561] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:45.542 [2024-11-06 15:35:03.494597] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:45.542 [2024-11-06 15:35:03.494942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.542 [2024-11-06 15:35:03.494981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b9dd0 with addr=10.0.0.2, port=4420 00:22:45.542 [2024-11-06 15:35:03.494993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b9dd0 is same with the state(6) to be set 00:22:45.542 [2024-11-06 15:35:03.495011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dc610 (9): Bad file descriptor 00:22:45.542 [2024-11-06 15:35:03.495053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.542 [2024-11-06 15:35:03.495066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.542 [... the same READ/ABORTED pair repeats for cid:0 through cid:19 (lba 24576-27008, len:128), timestamps 15:35:03.495087 through 15:35:03.495429 ...] 00:22:45.542 [2024-11-06 15:35:03.495439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.542 [2024-11-06
15:35:03.495446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.542 [2024-11-06 15:35:03.495456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.542 [2024-11-06 15:35:03.495463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.542 [2024-11-06 15:35:03.495473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.542 [2024-11-06 15:35:03.495480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.542 [2024-11-06 15:35:03.495490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.542 [2024-11-06 15:35:03.495498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.542 [2024-11-06 15:35:03.495508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.542 [2024-11-06 15:35:03.495515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495621] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.495983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.495991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.496000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.496007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.496017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.496024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.496034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.496042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.496052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.496059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.496069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.496076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.496086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.496093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.496103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.496110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.496120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.496127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.496137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.496145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.496154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.496162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.496171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.543 [2024-11-06 15:35:03.496180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.543 [2024-11-06 15:35:03.496280] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:45.543 [2024-11-06 15:35:03.496326] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:45.543 [2024-11-06 15:35:03.496401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b9dd0 (9): Bad file descriptor 00:22:45.543 [2024-11-06 15:35:03.496415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:45.543 [2024-11-06 15:35:03.496422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:45.544 [2024-11-06 15:35:03.496431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:45.544 [2024-11-06 15:35:03.496439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:45.544 [2024-11-06 15:35:03.497711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:45.544 [2024-11-06 15:35:03.497734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b9fd0 (9): Bad file descriptor 00:22:45.544 [2024-11-06 15:35:03.497754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:45.544 [2024-11-06 15:35:03.497762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:45.544 [2024-11-06 15:35:03.497772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:45.544 [2024-11-06 15:35:03.497780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
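A note on the "(00/08)" carried by every aborted completion above: NVMe reports status as a Status Code Type / Status Code pair, and SCT 0x0 (Generic Command Status) with SC 0x08 is "Command Aborted due to SQ Deletion". These WRITE/READ commands were still queued on qpair 1 when its submission queue was deleted as part of the controller reset, so they are failed back to the bdev layer; dnr:0 (Do Not Retry clear) marks them retryable. A minimal standalone decoder for that status word, assuming only the CQE dword 3 layout from the NVMe base spec (an illustrative sketch, not SPDK code):

/* decode_status.c: pull SCT/SC/DNR/More out of an NVMe CQE dword 3.
 * Layout per the NVMe base spec: bit 31 DNR, bit 30 More,
 * bits 27:25 SCT, bits 24:17 SC, bit 16 phase tag. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t dw3 = (0x0u << 25) | (0x08u << 17);  /* the (00/08) seen above */
    unsigned dnr  = (dw3 >> 31) & 0x1;
    unsigned more = (dw3 >> 30) & 0x1;
    unsigned sct  = (dw3 >> 25) & 0x7;
    unsigned sc   = (dw3 >> 17) & 0xff;
    printf("(%02x/%02x) m:%u dnr:%u -> %s\n", sct, sc, more, dnr,
           (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other");
    return 0;
}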
00:22:45.544 [2024-11-06 15:35:03.498514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.544 [2024-11-06 15:35:03.498530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b9fd0 with addr=10.0.0.2, port=4420
00:22:45.544 [2024-11-06 15:35:03.498538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b9fd0 is same with the state(6) to be set
00:22:45.544 [2024-11-06 15:35:03.498548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e2b10 (9): Bad file descriptor
00:22:45.544 [2024-11-06 15:35:03.498569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161a730 (9): Bad file descriptor
00:22:45.544 [2024-11-06 15:35:03.498588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c61b0 (9): Bad file descriptor
00:22:45.544 [2024-11-06 15:35:03.498609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161a4f0 (9): Bad file descriptor
00:22:45.544 [2024-11-06 15:35:03.498626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16198d0 (9): Bad file descriptor
00:22:45.544 [2024-11-06 15:35:03.498645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ed040 (9): Bad file descriptor
00:22:45.544 [2024-11-06 15:35:03.498663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c5d30 (9): Bad file descriptor
00:22:45.544 [2024-11-06 15:35:03.498751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b9fd0 (9): Bad file descriptor
00:22:45.544 [2024-11-06 15:35:03.498798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:45.544 [2024-11-06 15:35:03.498806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:45.544 [2024-11-06 15:35:03.498814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:45.544 [2024-11-06 15:35:03.498820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
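The two errno values repeated through this stretch are ordinary POSIX codes: errno 111 is ECONNREFUSED, meaning the reconnect to 10.0.0.2:4420 is actively refused while the target's listener is down, and "(9): Bad file descriptor" is EBADF, meaning the completion flush runs on a qpair whose socket has already been torn down. A small sketch that reproduces both codes on a Linux host, assuming nothing is listening at the address taken from the log (if the host were unreachable instead, connect() would time out rather than be refused):

/* errno_demo.c: show ECONNREFUSED (111) from a refused connect and
 * EBADF (9) from I/O on an already-closed descriptor. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    if (write(fd, "x", 1) < 0)            /* descriptor is already gone */
        printf("flush failed (%d): %s\n", errno, strerror(errno));
    return 0;
}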
00:22:45.544 [2024-11-06 15:35:03.503246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:45.544 [2024-11-06 15:35:03.503485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.544 [2024-11-06 15:35:03.503498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10dc610 with addr=10.0.0.2, port=4420
00:22:45.544 [2024-11-06 15:35:03.503507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dc610 is same with the state(6) to be set
00:22:45.544 [2024-11-06 15:35:03.503546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dc610 (9): Bad file descriptor
00:22:45.544 [2024-11-06 15:35:03.503584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:45.544 [2024-11-06 15:35:03.503592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:45.544 [2024-11-06 15:35:03.503599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:45.544 [2024-11-06 15:35:03.503606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:45.544 [2024-11-06 15:35:03.504172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:45.544 [2024-11-06 15:35:03.504563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.544 [2024-11-06 15:35:03.504576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b9dd0 with addr=10.0.0.2, port=4420
00:22:45.544 [2024-11-06 15:35:03.504583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b9dd0 is same with the state(6) to be set
00:22:45.544 [2024-11-06 15:35:03.504622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b9dd0 (9): Bad file descriptor
00:22:45.544 [2024-11-06 15:35:03.504661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:45.544 [2024-11-06 15:35:03.504668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:45.544 [2024-11-06 15:35:03.504675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:45.544 [2024-11-06 15:35:03.504682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
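The "Unexpected PDU type 0x00" errors in this window come from the NVMe/TCP receive path: every PDU starts with an 8-byte common header whose first byte is the PDU type, and 0x00 is ICReq, a host-to-controller PDU a host should never receive (an all-zero header is also what a read from a connection dying mid-teardown can yield). A sketch of such a check, assuming only the common-header layout from the NVMe/TCP spec; the accepted-type list here is an illustrative assumption, not SPDK's actual dispatch table:

/* pdu_ch_check.c: validate the first byte of an NVMe/TCP common header
 * (CH: type, flags, hlen, pdo, then a 4-byte plen) on the host side. */
#include <stdint.h>
#include <stdio.h>

static int host_accepts_pdu_type(uint8_t t)
{
    switch (t) {
    case 0x01:            /* ICResp */
    case 0x03:            /* C2HTermReq */
    case 0x05:            /* CapsuleResp */
    case 0x07:            /* C2HData */
    case 0x09:            /* R2T */
        return 1;
    default:
        return 0;
    }
}

int main(void)
{
    uint8_t wire[8] = {0};            /* all-zero header, as in the log */
    if (!host_accepts_pdu_type(wire[0]))
        printf("Unexpected PDU type 0x%02x\n", wire[0]);
    return 0;
}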
00:22:45.813 [2024-11-06 15:35:03.507974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:45.813 [2024-11-06 15:35:03.508210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.813 [2024-11-06 15:35:03.508222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b9fd0 with addr=10.0.0.2, port=4420
00:22:45.813 [2024-11-06 15:35:03.508230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b9fd0 is same with the state(6) to be set
00:22:45.813 [2024-11-06 15:35:03.508269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b9fd0 (9): Bad file descriptor
00:22:45.813 [2024-11-06 15:35:03.508376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:45.813 [2024-11-06 15:35:03.508385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:45.813 [2024-11-06 15:35:03.508392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:45.813 [2024-11-06 15:35:03.508399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:45.813 [2024-11-06 15:35:03.508436-03.509543] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 command/completion pairs condensed)
00:22:45.815 [2024-11-06 15:35:03.509551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4430 is same with the state(6) to be set
00:22:45.815 [2024-11-06 15:35:03.510843-03.511410] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-31 nsid:1 lba:24576-28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (32 command/completion pairs condensed; the final completion record is cut off in the captured log)
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:45.816 [2024-11-06 15:35:03.511769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 
15:35:03.511939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.511963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.511972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cb9e0 is same with the state(6) to be set 00:22:45.816 [2024-11-06 15:35:03.513255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.513268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.513281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.513291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.513302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.513311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.513322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.816 [2024-11-06 15:35:03.513330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.816 [2024-11-06 15:35:03.513339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.817 [2024-11-06 15:35:03.513877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.817 [2024-11-06 15:35:03.513885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.513894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.513901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.513911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.513918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.513927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.513935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.513944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.513951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.513961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.513968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.513977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.513984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.513994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.514357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.514366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c6990 is same with the state(6) to be set 00:22:45.818 [2024-11-06 15:35:03.515638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.515652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.515664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.515673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.515685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.818 [2024-11-06 15:35:03.515694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.818 [2024-11-06 15:35:03.515705] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
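Each burst above is the SPDK NVMe/TCP initiator draining a queue pair: once qid:1 is deleted, every outstanding READ completes with generic status 00/08 (sct:0x0, sc:0x08, ABORTED - SQ DELETION), and dnr:0 on each completion means the controller does not forbid a retry. A minimal sketch of how an application's I/O completion callback could classify this status via SPDK's public API follows; the status enums and struct fields are from spdk/nvme_spec.h, while io_ctx and the requeue hook are hypothetical names, not anything from this test run.

/* Sketch: classify the "ABORTED - SQ DELETION (00/08)" completions seen in
 * the log. read_done() matches the spdk_nvme_cmd_cb signature used by
 * spdk_nvme_ns_cmd_read() and friends. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* True when the completion carries generic status 00/08, i.e. the command
 * was aborted because its submission queue was deleted. */
static bool
cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

/* Hypothetical per-I/O context. */
struct io_ctx {
	uint64_t lba;
	uint32_t lba_count;
};

static void
read_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *io = arg;

	if (spdk_nvme_cpl_is_success(cpl)) {
		return; /* normal completion */
	}

	if (cpl_is_sq_deletion_abort(cpl) && !cpl->status.dnr) {
		/* dnr:0, as in every completion logged above: the I/O may be
		 * resubmitted on a live qpair once the path recovers. */
		fprintf(stderr, "READ lba:%" PRIu64 " aborted by SQ deletion, requeueing\n",
			io->lba);
		/* requeue_io(io);  -- application-specific retry hook */
		return;
	}

	fprintf(stderr, "READ lba:%" PRIu64 " failed: sct:%u sc:%u\n",
		io->lba, (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
}

A predicate like this lets a retry path tell transport-teardown aborts, such as the ones flooding this log, apart from genuine media errors before deciding whether to resubmit.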
00:22:45.820 [2024-11-06 15:35:03.518060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:45.820 [2024-11-06 15:35:03.518073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs continue for cid:1 through cid:8 (lba:24704 through lba:25600, len:128) ...]
00:22:45.820 [2024-11-06 15:35:03.518235] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.820 [2024-11-06 15:35:03.518242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.821 [2024-11-06 15:35:03.518912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.821 [2024-11-06 15:35:03.518921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:45.821 [2024-11-06 15:35:03.518928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.518938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.518946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.518955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.518962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.518972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.518979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.518988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.518996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.519006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.519013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.519022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.519030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.519039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.519048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.519058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.519065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.519074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.519082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.519091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 
15:35:03.519098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.519108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.519116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.519125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.519133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.519142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.519149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.519159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.519166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.519174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cb500 is same with the state(6) to be set 00:22:45.822 [2024-11-06 15:35:03.520443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.822 [2024-11-06 15:35:03.520766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.822 [2024-11-06 15:35:03.520773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.520987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.520994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.823 [2024-11-06 15:35:03.521432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.823 [2024-11-06 15:35:03.521442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.521449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.521458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.521466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.521475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.521483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.521492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.521499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.521509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.521516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.521525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.521532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.521541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cc7d0 is same with the state(6) to be set 00:22:45.824 [2024-11-06 15:35:03.522821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.522834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.522846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.522856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.522867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.522879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.522891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.522900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.522911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.522920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.522931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.522940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.522952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.522960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.522970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.522977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.522987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.522994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.523004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.523012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.824 [2024-11-06 15:35:03.523021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.824 [2024-11-06 15:35:03.523028] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:45.824 [2024-11-06 15:35:03.523038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:45.824 [2024-11-06 15:35:03.523045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION notice pair repeats for cid:12 through cid:62, lba stepping by 128 from 17920 to 24320 ...]
00:22:45.825 [2024-11-06 15:35:03.528772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:45.825 [2024-11-06 15:35:03.528780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:45.825 [2024-11-06 15:35:03.528789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15cda90 is same with the state(6) to be set
00:22:45.825 [2024-11-06 15:35:03.530366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:45.825 [2024-11-06 15:35:03.530398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:45.825 [2024-11-06 15:35:03.530414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:45.825 [2024-11-06 15:35:03.530425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:45.825 [2024-11-06 15:35:03.530517] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:22:45.825 [2024-11-06 15:35:03.530534] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:22:45.825 [2024-11-06 15:35:03.530547] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:45.825 [2024-11-06 15:35:03.547372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:45.825 [2024-11-06 15:35:03.547397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:45.825 task offset: 29184 on job bdev=Nvme7n1 fails
00:22:45.825
00:22:45.825 Latency(us)
00:22:45.825 [2024-11-06T14:35:03.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:45.825 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.825 Job: Nvme1n1 ended in about 0.99 seconds with error
00:22:45.825 Verification LBA range: start 0x0 length 0x400
00:22:45.825 Nvme1n1 : 0.99 193.75 12.11 64.58 0.00 245014.83 22282.24 263891.63
00:22:45.825 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.825 Job: Nvme2n1 ended in about 0.98 seconds with error
00:22:45.825 Verification LBA range: start 0x0 length 0x400
00:22:45.825 Nvme2n1 : 0.98 196.34 12.27 65.45 0.00 236929.55 2635.09 249910.61
00:22:45.825 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.825 Job: Nvme3n1 ended in about 0.99 seconds with error
00:22:45.825 Verification LBA range: start 0x0 length 0x400
00:22:45.825 Nvme3n1 : 0.99 193.28 12.08 64.43 0.00 235898.88 19660.80 249910.61
00:22:45.825 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.825 Job: Nvme4n1 ended in about 1.00 seconds with error
00:22:45.825 Verification LBA range: start 0x0 length 0x400
00:22:45.826 Nvme4n1 : 1.00 192.82 12.05 64.27 0.00 231699.63 37792.43 225443.84
00:22:45.826 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.826 Job: Nvme5n1 ended in about 0.97 seconds with error
00:22:45.826 Verification LBA range: start 0x0 length 0x400
00:22:45.826 Nvme5n1 : 0.97 197.30 12.33 65.77 0.00 221193.49 5843.63 253405.87
00:22:45.826 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.826 Job: Nvme6n1 ended in about 1.00 seconds with error
00:22:45.826 Verification LBA range: start 0x0 length 0x400
00:22:45.826 Nvme6n1 : 1.00 128.24 8.01 64.12 0.00 296920.75 20097.71 270882.13
00:22:45.826 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.826 Job: Nvme7n1 ended in about 0.97 seconds with error
00:22:45.826 Verification LBA range: start 0x0 length 0x400
00:22:45.826 Nvme7n1 : 0.97 197.58 12.35 65.86 0.00 211163.57 4041.39 234181.97
00:22:45.826 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.826 Job: Nvme8n1 ended in about 1.00 seconds with error
00:22:45.826 Verification LBA range: start 0x0 length 0x400
00:22:45.826 Nvme8n1 : 1.00 191.90 11.99 63.97 0.00 213503.68 11141.12 256901.12
00:22:45.826 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.826 Job: Nvme9n1 ended in about 1.00 seconds with error
00:22:45.826 Verification LBA range: start 0x0 length 0x400
00:22:45.826 Nvme9n1 : 1.00 127.63 7.98 63.81 0.00 279082.10 21626.88 253405.87
00:22:45.826 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.826 Job: Nvme10n1 ended in about 1.01 seconds with error
00:22:45.826 Verification LBA range: start 0x0 length 0x400
00:22:45.826 Nvme10n1 : 1.01 126.71 7.92 63.36 0.00 275211.66 18350.08 272629.76
00:22:45.826 [2024-11-06T14:35:03.809Z] ===================================================================================================================
00:22:45.826 [2024-11-06T14:35:03.809Z] Total : 1745.55 109.10 645.61 0.00 241493.46 2635.09 272629.76
00:22:45.826 [2024-11-06 15:35:03.575646] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:45.826 [2024-11-06 15:35:03.575693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:45.826 [2024-11-06 15:35:03.576065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.826 [2024-11-06 15:35:03.576086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c61b0 with addr=10.0.0.2, port=4420
00:22:45.826 [2024-11-06 15:35:03.576097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c61b0 is same with the state(6) to be set
00:22:45.826 [2024-11-06 15:35:03.576182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.826 [2024-11-06 15:35:03.576192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c5d30 with addr=10.0.0.2, port=4420
00:22:45.826 [2024-11-06 15:35:03.576200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c5d30 is same with the state(6) to be set
00:22:45.826 [2024-11-06 15:35:03.576420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.826 [2024-11-06 15:35:03.576438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e2b10 with addr=10.0.0.2, port=4420
00:22:45.826 [2024-11-06 15:35:03.576446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e2b10 is same with the state(6) to be set
00:22:45.826 [2024-11-06 15:35:03.576662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.826 [2024-11-06 15:35:03.576672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ed040 with addr=10.0.0.2, port=4420
00:22:45.826 [2024-11-06 15:35:03.576679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ed040 is same with the state(6) to be set
00:22:45.826 [2024-11-06 15:35:03.576707] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:22:45.826 [2024-11-06 15:35:03.576719] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:22:45.826 [2024-11-06 15:35:03.576730] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:22:45.826 [2024-11-06 15:35:03.576754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ed040 (9): Bad file descriptor
00:22:45.826 [2024-11-06 15:35:03.576772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e2b10 (9): Bad file descriptor
00:22:45.826 [2024-11-06 15:35:03.576784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c5d30 (9): Bad file descriptor
00:22:45.826 [2024-11-06 15:35:03.576797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c61b0 (9): Bad file descriptor
00:22:45.826 1745.55 IOPS, 109.10 MiB/s [2024-11-06T14:35:03.809Z]
[2024-11-06 15:35:03.578690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:45.826 [2024-11-06 15:35:03.578704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:45.826 [2024-11-06 15:35:03.578965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.826 [2024-11-06 15:35:03.578981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a4f0 with addr=10.0.0.2, port=4420
00:22:45.826 [2024-11-06 15:35:03.578988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161a4f0 is same with the state(6) to be set
00:22:45.826 [2024-11-06 15:35:03.579169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.826 [2024-11-06 15:35:03.579185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a730 with addr=10.0.0.2, port=4420
00:22:45.826 [2024-11-06 15:35:03.579193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161a730 is same with the state(6) to be set
00:22:45.826 [2024-11-06 15:35:03.579239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.826 [2024-11-06 15:35:03.579248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16198d0 with addr=10.0.0.2, port=4420
00:22:45.826 [2024-11-06 15:35:03.579255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16198d0 is same with the state(6) to be set
00:22:45.826 [2024-11-06 15:35:03.579281] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:22:45.826 [2024-11-06 15:35:03.579293] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:22:45.826 [2024-11-06 15:35:03.579305] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:22:45.826 [2024-11-06 15:35:03.579317] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:22:45.826 [2024-11-06 15:35:03.579327] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:22:45.826 [2024-11-06 15:35:03.579577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:45.826 [2024-11-06 15:35:03.579939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.826 [2024-11-06 15:35:03.579954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10dc610 with addr=10.0.0.2, port=4420
00:22:45.826 [2024-11-06 15:35:03.579962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dc610 is same with the state(6) to be set
00:22:45.826 [2024-11-06 15:35:03.580286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.826 [2024-11-06 15:35:03.580296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b9dd0 with addr=10.0.0.2, port=4420
00:22:45.826 [2024-11-06 15:35:03.580303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b9dd0 is same with the state(6) to be set
00:22:45.826 [2024-11-06 15:35:03.580313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161a4f0 (9): Bad file descriptor
00:22:45.826 [2024-11-06 15:35:03.580324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161a730 (9): Bad file descriptor
00:22:45.826 [2024-11-06 15:35:03.580333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16198d0 (9): Bad file descriptor
00:22:45.826 [2024-11-06 15:35:03.580343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:45.826 [2024-11-06 15:35:03.580350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:45.826 [2024-11-06 15:35:03.580359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:45.826 [2024-11-06 15:35:03.580368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:45.826 [2024-11-06 15:35:03.580376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:45.826 [2024-11-06 15:35:03.580383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:45.826 [2024-11-06 15:35:03.580390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:45.826 [2024-11-06 15:35:03.580396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:22:45.826 [2024-11-06 15:35:03.580407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:45.826 [2024-11-06 15:35:03.580414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:45.826 [2024-11-06 15:35:03.580421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:45.826 [2024-11-06 15:35:03.580427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:45.826 [2024-11-06 15:35:03.580436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:22:45.826 [2024-11-06 15:35:03.580442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:22:45.826 [2024-11-06 15:35:03.580449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:22:45.826 [2024-11-06 15:35:03.580455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:22:45.826 [2024-11-06 15:35:03.580775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:45.826 [2024-11-06 15:35:03.580788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b9fd0 with addr=10.0.0.2, port=4420
00:22:45.826 [2024-11-06 15:35:03.580795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b9fd0 is same with the state(6) to be set
00:22:45.826 [2024-11-06 15:35:03.580804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dc610 (9): Bad file descriptor
00:22:45.826 [2024-11-06 15:35:03.580814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b9dd0 (9): Bad file descriptor
00:22:45.826 [2024-11-06 15:35:03.580822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:22:45.826 [2024-11-06 15:35:03.580829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:22:45.826 [2024-11-06 15:35:03.580837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:22:45.827 [2024-11-06 15:35:03.580843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:22:45.827 [2024-11-06 15:35:03.580851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:22:45.827 [2024-11-06 15:35:03.580857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:22:45.827 [2024-11-06 15:35:03.580864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:22:45.827 [2024-11-06 15:35:03.580870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:22:45.827 [2024-11-06 15:35:03.580877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:22:45.827 [2024-11-06 15:35:03.580883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:22:45.827 [2024-11-06 15:35:03.580890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:22:45.827 [2024-11-06 15:35:03.580897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:22:45.827 [2024-11-06 15:35:03.580924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b9fd0 (9): Bad file descriptor
00:22:45.827 [2024-11-06 15:35:03.580933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:45.827 [2024-11-06 15:35:03.580939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:45.827 [2024-11-06 15:35:03.580946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:45.827 [2024-11-06 15:35:03.580956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:45.827 [2024-11-06 15:35:03.580963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:45.827 [2024-11-06 15:35:03.580970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:45.827 [2024-11-06 15:35:03.580977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:45.827 [2024-11-06 15:35:03.580983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:45.827 [2024-11-06 15:35:03.581683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:45.827 [2024-11-06 15:35:03.581693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:45.827 [2024-11-06 15:35:03.581701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:45.827 [2024-11-06 15:35:03.581710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:45.827 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3844719 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3844719 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3844719 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.769 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.030 rmmod nvme_tcp 00:22:47.030 
rmmod nvme_fabrics 00:22:47.030 rmmod nvme_keyring 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3844538 ']' 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3844538 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3844538 ']' 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3844538 00:22:47.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3844538) - No such process 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3844538 is not found' 00:22:47.030 Process with pid 3844538 is not found 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.030 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.942 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.942 00:22:48.942 real 0m7.363s 00:22:48.942 user 0m17.475s 00:22:48.942 sys 0m1.255s 00:22:48.942 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:48.942 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.942 ************************************ 00:22:48.942 END TEST nvmf_shutdown_tc3 00:22:48.942 ************************************ 00:22:49.203 15:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:49.203 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:49.203 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:49.203 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:49.203 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:49.203 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:49.203 ************************************ 00:22:49.203 START TEST nvmf_shutdown_tc4 00:22:49.203 ************************************ 00:22:49.203 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:22:49.203 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:49.203 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:49.203 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.204 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:49.204 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:49.204 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.204 15:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:49.204 Found net devices under 0000:31:00.0: cvl_0_0 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:49.204 Found net devices under 0000:31:00.1: cvl_0_1 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.204 15:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.204 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.205 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.205 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.205 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.205 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.205 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.465 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.465 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.465 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.465 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:22:49.466 00:22:49.466 --- 10.0.0.2 ping statistics --- 00:22:49.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.466 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:22:49.466 00:22:49.466 --- 10.0.0.1 ping statistics --- 00:22:49.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.466 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3846067 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3846067 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3846067 ']' 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:49.466 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:49.726 [2024-11-06 15:35:07.456854] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:22:49.726 [2024-11-06 15:35:07.456924] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.726 [2024-11-06 15:35:07.553036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.726 [2024-11-06 15:35:07.587324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.726 [2024-11-06 15:35:07.587354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.726 [2024-11-06 15:35:07.587361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.726 [2024-11-06 15:35:07.587365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.726 [2024-11-06 15:35:07.587370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.726 [2024-11-06 15:35:07.588700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.726 [2024-11-06 15:35:07.588848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.726 [2024-11-06 15:35:07.588974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.726 [2024-11-06 15:35:07.588974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:50.297 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:50.297 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:22:50.297 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:50.297 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.297 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.557 [2024-11-06 15:35:08.301032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:50.557 15:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:50.557 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:50.558 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.558 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.558 Malloc1 
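The ten cat invocations above append one block of RPCs per subsystem to rpcs.txt, and the bare rpc_cmd at shutdown.sh@36 then feeds the whole file to the target in a single batch over /var/tmp/spdk.sock; the Malloc1..Malloc10 lines are the bdev names echoed back as each create completes. Reconstructed from what this trace shows (subsystems nqn.2016-06.io.spdk:cnode1..10 backed by Malloc1..10, all listening on 10.0.0.2:4420), each block is plausibly along these lines; the malloc size and block size below are placeholders, since the real values live in shutdown.sh and are not echoed in this excerpt:

  for i in {1..10}; do
      {
          echo "bdev_malloc_create 128 512 -b Malloc$i"
          echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
          echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
          echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> rpcs.txt
  done
  rpc_cmd < rpcs.txt   # one batched JSON-RPC submission against /var/tmp/spdk.sock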
00:22:50.558 [2024-11-06 15:35:08.415689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.558 Malloc2 00:22:50.558 Malloc3 00:22:50.558 Malloc4 00:22:50.818 Malloc5 00:22:50.818 Malloc6 00:22:50.818 Malloc7 00:22:50.818 Malloc8 00:22:50.818 Malloc9 00:22:50.818 Malloc10 00:22:50.818 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.818 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:50.818 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.818 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.078 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3846448 00:22:51.078 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:51.078 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:51.078 [2024-11-06 15:35:08.891047] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:56.370 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:56.370 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3846067 00:22:56.370 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3846067 ']' 00:22:56.370 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3846067 00:22:56.370 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:22:56.370 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:56.370 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3846067 00:22:56.370 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:56.370 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:56.370 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3846067' 00:22:56.370 killing process with pid 3846067 00:22:56.370 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3846067 00:22:56.370 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3846067 00:22:56.371 [2024-11-06 15:35:13.889524] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16d0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.889570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b16d0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.889917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2070 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.889945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2070 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.889951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2070 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.889956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2070 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.889961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2070 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.889966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2070 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.889971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2070 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.890374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1200 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.890402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1200 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.890408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1200 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.890413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1200 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.893049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b40d0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.893080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b40d0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.893089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b40d0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.893096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b40d0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.893339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b45a0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.893356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b45a0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.893392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3730 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.893410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3730 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.893415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3730 is same with the 
state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.893420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3730 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.893425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3730 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.893431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3730 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b28c0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b28c0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b28c0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b28c0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2d90 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2d90 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2d90 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2d90 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2d90 is same with the state(6) to be set 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 starting I/O failed: -6 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 starting I/O failed: -6 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 starting I/O failed: -6 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 starting I/O failed: -6 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 [2024-11-06 15:35:13.894595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3260 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3260 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3260 is same with the state(6) to be set 00:22:56.371 Write completed with error (sct=0, sc=8)
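The failure tuple that repeats from here on decodes cleanly: sct=0 is the NVMe "Generic Command Status" type, and within that type sc=0x8 is "Command Aborted due to SQ Deletion", which is exactly what in-flight writes should report when the target's submission queues vanish underneath them; the -6 is -ENXIO, the same "No such device or address" the qpair errors below spell out. A throwaway helper for reading traces like this one (hypothetical, covering only the tuple seen here):

  decode_write_error() {   # usage: decode_write_error <sct> <sc>
      if [ "$1" -eq 0 ] && [ "$2" -eq 8 ]; then
          echo "Generic Command Status: Command Aborted due to SQ Deletion"
      else
          echo "sct=$1 sc=$2: see the status code tables in the NVMe base spec"
      fi
  }
  decode_write_error 0 8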
00:22:56.371 [2024-11-06 15:35:13.894622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3260 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b3260 is same with the state(6) to be set 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 starting I/O failed: -6 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 starting I/O failed: -6 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 starting I/O failed: -6 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 starting I/O failed: -6 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 starting I/O failed: -6 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 [2024-11-06 15:35:13.894984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b23f0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b23f0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.895004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b23f0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.895009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b23f0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.894979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:56.371 [2024-11-06 15:35:13.895014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b23f0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.895020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b23f0 is same with the state(6) to be set 00:22:56.371 NVMe io qpair process completion error 00:22:56.371 [2024-11-06 15:35:13.895026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b23f0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.895032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b23f0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.895037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b23f0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.895041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b23f0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.895548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x11f6840 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.895562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6840 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.895809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6be0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.895827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6be0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.895834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6be0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.895841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6be0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.896020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14686a0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.896035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14686a0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.896040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14686a0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.896044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14686a0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.896049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14686a0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.896054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14686a0 is same with the state(6) to be set 00:22:56.371 [2024-11-06 15:35:13.896058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14686a0 is same with the state(6) to be set 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.371 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 
Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 [2024-11-06 15:35:13.896832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with 
error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 [2024-11-06 15:35:13.897635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 
00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.372 Write completed with error (sct=0, sc=8) 00:22:56.372 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 
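The "[nqn.2016-06.io.spdk:cnodeN, 1] CQ transport error -6" lines are the initiator-side view of the shutdown: killprocess (its xtrace is visible above at autotest_common.sh@952 through @976) took the target down while perf still had 128-deep queues in flight on four qpairs per subsystem. Reduced to the steps that xtrace shows, the helper behaves roughly like this sketch (simplified; the real function in test/common/autotest_common.sh also special-cases sudo-wrapped processes):

  killprocess() {   # minimal sketch of the traced helper, not the real code
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 0            # nothing to kill if already gone
      if [ "$(uname)" = Linux ]; then
          ps --no-headers -o comm= "$pid"   # identify what is being killed
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                           # reap it and surface its exit code
  }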
00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 [2024-11-06 15:35:13.899971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:56.373 NVMe io qpair process completion error 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error 
(sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 [2024-11-06 15:35:13.901165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 
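For orientation amid the error storm, the whole of nvmf_shutdown_tc4 as captured in this trace reduces to the sequence below (step markers are the target/shutdown.sh line numbers echoed above). The storm is the expected outcome: tc4 deliberately kills the target mid-run and checks that the initiator fails cleanly instead of hanging.

  nvmfappstart -m 0x1E                                      # @19: target on cores 1-4
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192           # @21
  # @27..@36: stage rpcs.txt for subsystems 1..10, submit as one batch
  spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!                                                # @148-@149 (3846448 here)
  sleep 5                                                   # @150: let I/O ramp up
  killprocess "$nvmfpid"                                    # @155: target dies mid-I/O
  # from here on: writes abort with sct=0/sc=8, qpairs drop with -ENXIO (-6)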
00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 Write completed with error (sct=0, sc=8) 00:22:56.373 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 [2024-11-06 15:35:13.901968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 
starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 [2024-11-06 15:35:13.902888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting 
I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.374 Write completed with error (sct=0, sc=8) 00:22:56.374 starting I/O failed: -6 00:22:56.375 Write completed with error (sct=0, sc=8) 00:22:56.375 starting I/O failed: -6 00:22:56.375 Write completed with error (sct=0, sc=8) 00:22:56.375 starting I/O failed: -6 00:22:56.375 Write completed with error (sct=0, sc=8) 00:22:56.375 starting I/O failed: -6 00:22:56.375 Write completed with error (sct=0, sc=8) 00:22:56.375 starting I/O failed: -6 00:22:56.375 Write completed with error (sct=0, sc=8) 00:22:56.375 starting I/O failed: -6 00:22:56.375 Write completed with error (sct=0, sc=8) 00:22:56.375 starting I/O failed: -6 00:22:56.375 Write completed with error (sct=0, sc=8) 00:22:56.375 starting I/O failed: -6 00:22:56.375 Write completed with error (sct=0, sc=8) 00:22:56.375 starting I/O failed: -6 00:22:56.375 Write completed with error (sct=0, sc=8) 00:22:56.375 starting I/O 
failed: -6
00:22:56.375 Write completed with error (sct=0, sc=8)
00:22:56.375 starting I/O failed: -6
00:22:56.375 [2024-11-06 15:35:13.904552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:56.375 NVMe io qpair process completion error
00:22:56.375 Write completed with error (sct=0, sc=8)
00:22:56.375 starting I/O failed: -6
00:22:56.375 [2024-11-06 15:35:13.905715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:56.375 Write completed with error (sct=0, sc=8)
00:22:56.375 starting I/O failed: -6
00:22:56.375 [2024-11-06 15:35:13.906539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:56.376 Write completed with error (sct=0, sc=8)
00:22:56.376 starting I/O failed: -6
00:22:56.376 [2024-11-06 15:35:13.907450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:56.376 Write completed with error (sct=0, sc=8)
00:22:56.376 starting I/O failed: -6
00:22:56.377 [2024-11-06 15:35:13.910166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:56.377 NVMe io qpair process completion error
00:22:56.377 Write completed with error (sct=0, sc=8)
00:22:56.377 starting I/O failed: -6
00:22:56.377 [2024-11-06 15:35:13.911472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:56.377 Write completed with error (sct=0, sc=8)
00:22:56.377 starting I/O failed: -6
00:22:56.377 [2024-11-06 15:35:13.912402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:56.377 Write completed with error (sct=0, sc=8)
00:22:56.377 starting I/O failed: -6
00:22:56.377 [2024-11-06 15:35:13.913324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:56.378 Write completed with error (sct=0, sc=8)
00:22:56.378 starting I/O failed: -6
00:22:56.378 [2024-11-06 15:35:13.914968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:56.378 NVMe io qpair process completion error
00:22:56.378 Write completed with error (sct=0, sc=8)
00:22:56.378 starting I/O failed: -6
00:22:56.378 [2024-11-06 15:35:13.915913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:56.378 Write completed with error (sct=0, sc=8)
00:22:56.378 starting I/O failed: -6
00:22:56.379 [2024-11-06 15:35:13.916719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:56.379 Write completed with error (sct=0, sc=8)
00:22:56.379 starting I/O failed: -6
00:22:56.379 [2024-11-06 15:35:13.917655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:56.379 Write completed with error (sct=0, sc=8)
00:22:56.379 starting I/O failed: -6
00:22:56.380 [2024-11-06 15:35:13.920211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:56.380 NVMe io qpair process completion error
00:22:56.380 Write completed with error (sct=0, sc=8)
00:22:56.380 starting I/O failed: -6
00:22:56.380 [2024-11-06 15:35:13.921308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:56.380 Write completed with error (sct=0, sc=8)
00:22:56.380 starting I/O failed: -6
00:22:56.380 [2024-11-06 15:35:13.922123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:56.380 Write completed with error (sct=0, sc=8)
00:22:56.380 starting I/O failed: -6
00:22:56.381 [2024-11-06 15:35:13.923514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:56.381 Write completed with error (sct=0, sc=8)
00:22:56.381 starting I/O failed: -6
00:22:56.381 [2024-11-06 15:35:13.925333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:56.381 NVMe io qpair process completion error
00:22:56.381 Write completed with error (sct=0, sc=8)
00:22:56.381 starting I/O failed: -6
00:22:56.382 [2024-11-06 15:35:13.926591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:56.382 Write completed with error (sct=0, sc=8)
00:22:56.382 starting I/O failed: -6
00:22:56.382 [2024-11-06 15:35:13.927417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:56.382 Write completed with error (sct=0, sc=8)
00:22:56.382 starting I/O failed: -6
00:22:56.382 [2024-11-06 15:35:13.928340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:56.382 Write completed with error (sct=0, sc=8)
00:22:56.382 starting I/O failed: -6
00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.382 starting I/O failed: -6 00:22:56.382 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 
00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 [2024-11-06 15:35:13.929769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:56.383 NVMe io qpair process completion error 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with 
error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 
00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 [2024-11-06 15:35:13.931559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.383 starting I/O failed: -6 00:22:56.383 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O 
failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 [2024-11-06 15:35:13.932503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, 
sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 [2024-11-06 15:35:13.935544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:56.384 NVMe io qpair process completion error 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with 
error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 starting I/O failed: -6 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.384 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 [2024-11-06 15:35:13.936855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 
00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 [2024-11-06 15:35:13.937663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O 
failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.385 starting I/O failed: -6 00:22:56.385 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 [2024-11-06 15:35:13.938594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, 
sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 
00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 starting I/O failed: -6 00:22:56.386 [2024-11-06 15:35:13.940211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:56.386 NVMe io qpair process completion error 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 
Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.386 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 00:22:56.387 Write completed with error (sct=0, sc=8) 
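The burst above is thousands of near-identical completion records. As a triage aid, a minimal bash sketch (assuming this console output was captured to a hypothetical file named console.log; in the NVMe generic status table, the sct=0/sc=8 pair corresponds to "Command Aborted due to SQ Deletion", the status queued writes pick up when their qpair is torn down mid-run):

  # Count the aborted writes, then see how the CQ transport errors spread across subsystems
  grep -c 'Write completed with error (sct=0, sc=8)' console.log
  grep -o 'cnode[0-9]*, 1] CQ transport error -6' console.log | sort | uniq -c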
00:22:56.387 Initializing NVMe Controllers
00:22:56.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:56.387 Controller IO queue size 128, less than required.
00:22:56.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:56.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:56.387 Controller IO queue size 128, less than required.
00:22:56.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:56.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:56.387 Controller IO queue size 128, less than required.
00:22:56.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:56.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:56.387 Controller IO queue size 128, less than required.
00:22:56.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:56.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:56.387 Controller IO queue size 128, less than required.
00:22:56.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:56.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:56.387 Controller IO queue size 128, less than required.
00:22:56.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:56.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:56.387 Controller IO queue size 128, less than required.
00:22:56.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:56.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:56.387 Controller IO queue size 128, less than required.
00:22:56.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:56.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:56.387 Controller IO queue size 128, less than required.
00:22:56.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:56.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:56.387 Controller IO queue size 128, less than required.
00:22:56.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
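The repeated queue-size notice above is advisory: each target qpair holds 128 entries, so deeper submissions simply wait in the driver. Acting on the hint would look roughly like the following sketch (not the harness's actual command line; flag meanings as in SPDK's examples/nvme/perf tool: -q queue depth, -o IO size in bytes, -w workload, -t run time in seconds, -r target transport ID, with the address and subsystem values taken from this run):

  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w write -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode5'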
00:22:56.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:56.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:56.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:56.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:56.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:56.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:56.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:56.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:56.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:56.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:56.387 Initialization complete. Launching workers.
00:22:56.387 ========================================================
00:22:56.387 Latency(us)
00:22:56.387 Device Information : IOPS MiB/s Average min max
00:22:56.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1839.12 79.02 69626.73 783.61 117307.62
00:22:56.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1930.80 82.96 66337.86 822.26 146446.66
00:22:56.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1955.47 84.02 65524.05 674.70 117567.83
00:22:56.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1901.77 81.72 67413.33 567.14 116626.72
00:22:56.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1953.94 83.96 65631.59 665.77 116627.26
00:22:56.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1872.74 80.47 68516.57 712.00 124521.11
00:22:56.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1910.72 82.10 67180.70 841.14 117708.02
00:22:56.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1922.51 82.61 66784.54 892.33 128304.74
00:22:56.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1903.95 81.81 67479.58 686.36 117089.08
00:22:56.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1885.18 81.00 68093.56 988.30 132891.76
00:22:56.387 ========================================================
00:22:56.387 Total : 19076.20 819.68 67237.17 567.14 146446.66
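In the table above, the Total row aggregates the ten device rows: IOPS and MiB/s are sums, while min and max are taken across all rows. A quick re-derivation of the sums with awk (assuming the table text is saved to a hypothetical file perf.txt; the last five fields of each device row are IOPS, MiB/s, Average, min, max):

  awk '/from core 0:/ { iops += $(NF-4); mib += $(NF-3) }
       END { printf "%.2f %.2f\n", iops, mib }' perf.txt
  # IOPS should match 19076.20; the MiB/s sum lands within rounding of 819.68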
00:22:56.387 [2024-11-06 15:35:13.947233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf29f0 is same with the state(6) to be set
00:22:56.387 [2024-11-06 15:35:13.947278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf3380 is same with the state(6) to be set
00:22:56.387 [2024-11-06 15:35:13.947309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf39e0 is same with the state(6) to be set
00:22:56.387 [2024-11-06 15:35:13.947339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf4360 is same with the state(6) to be set
00:22:56.387 [2024-11-06 15:35:13.947367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf36b0 is same with the state(6) to be set
00:22:56.387 [2024-11-06 15:35:13.947396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf2060 is same with the state(6) to be set
00:22:56.387 [2024-11-06 15:35:13.947424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf3050 is same with the state(6) to be set
00:22:56.387 [2024-11-06 15:35:13.947452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf4540 is same with the state(6) to be set
00:22:56.387 [2024-11-06 15:35:13.947480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf26c0 is same with the state(6) to be set
00:22:56.387 [2024-11-06 15:35:13.947508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf2390 is same with the state(6) to be set
00:22:56.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:56.387 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3846448
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3846448
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3846448
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
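The NOT/wait sequence in the trace above is the harness asserting that the perf process exited nonzero: wait reproduces the child's exit status, and NOT inverts it. Reduced to a bash sketch (simplified relative to the real autotest_common.sh helper; perf_pid is an illustrative variable standing in for pid 3846448):

  NOT() {
      if "$@"; then
          return 1   # wrapped command succeeded, so the negative assertion fails
      fi
      return 0       # wrapped command failed, which is the expected outcome here
  }
  NOT wait "$perf_pid"   # passes, since spdk_nvme_perf ended with "errors occurred"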
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.330 rmmod nvme_tcp 00:22:57.330 rmmod nvme_fabrics 00:22:57.330 rmmod nvme_keyring 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3846067 ']' 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3846067 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3846067 ']' 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3846067 00:22:57.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3846067) - No such process 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3846067 is not found' 00:22:57.330 Process with pid 3846067 is not found 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.330 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.876 15:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.876 00:22:59.876 real 0m10.322s 00:22:59.876 user 0m28.041s 00:22:59.876 sys 0m3.888s 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:59.876 ************************************ 00:22:59.876 END TEST nvmf_shutdown_tc4 00:22:59.876 ************************************ 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:59.876 00:22:59.876 real 0m42.898s 00:22:59.876 user 1m43.472s 00:22:59.876 sys 0m13.457s 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:59.876 ************************************ 00:22:59.876 END TEST nvmf_shutdown 00:22:59.876 ************************************ 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:59.876 ************************************ 00:22:59.876 START TEST nvmf_nsid 00:22:59.876 ************************************ 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:59.876 * Looking for test storage... 
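[annotation] On the nvmf_shutdown_tc4 teardown traced above: the target process is already gone while spdk_nvme_perf is still driving I/O (killprocess reports "No such process"), so the qpair recv-state errors and perf's non-zero exit appear to be the intended outcome, and the NOT wait assertion requires exactly that. A minimal sketch of a NOT-style wrapper, a hypothetical simplification of the autotest_common.sh helper:

    NOT() {
        # Run the wrapped command and invert its status: the caller
        # passes only when the command fails (here, "NOT wait <pid>").
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

As a sanity check on the latency table: MiB/s = IOPS x io_size / 2^20. The ratio 819.68 / 19076.20 is consistent with a 45056-byte (44 KiB) I/O size, and 19076.20 x 45056 / 1048576 ≈ 819.7 MiB/s matches the Total row.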
00:22:59.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:59.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.876 --rc genhtml_branch_coverage=1 00:22:59.876 --rc genhtml_function_coverage=1 00:22:59.876 --rc genhtml_legend=1 00:22:59.876 --rc geninfo_all_blocks=1 00:22:59.876 --rc geninfo_unexecuted_blocks=1 00:22:59.876 00:22:59.876 ' 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:59.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.876 --rc genhtml_branch_coverage=1 00:22:59.876 --rc genhtml_function_coverage=1 00:22:59.876 --rc genhtml_legend=1 00:22:59.876 --rc geninfo_all_blocks=1 00:22:59.876 --rc geninfo_unexecuted_blocks=1 00:22:59.876 00:22:59.876 ' 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:59.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.876 --rc genhtml_branch_coverage=1 00:22:59.876 --rc genhtml_function_coverage=1 00:22:59.876 --rc genhtml_legend=1 00:22:59.876 --rc geninfo_all_blocks=1 00:22:59.876 --rc geninfo_unexecuted_blocks=1 00:22:59.876 00:22:59.876 ' 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:59.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.876 --rc genhtml_branch_coverage=1 00:22:59.876 --rc genhtml_function_coverage=1 00:22:59.876 --rc genhtml_legend=1 00:22:59.876 --rc geninfo_all_blocks=1 00:22:59.876 --rc geninfo_unexecuted_blocks=1 00:22:59.876 00:22:59.876 ' 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.876 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:59.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.877 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:08.018 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:08.019 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:08.019 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
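[annotation] The scan traced above classifies PCI functions by vendor/device ID before choosing TCP test ports; a condensed sketch of that classification (an assumed simplification of gather_supported_nvmf_pci_devs, with the ID table abbreviated):

    # Intel E810 ports enumerate as vendor 0x8086 with device 0x1592/0x159b,
    # which is why 0000:31:00.0 and 0000:31:00.1 (0x8086 - 0x159b) land in e810.
    e810=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
        if [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]]; then
            e810+=("${pci##*/}")
        fi
    done
    # The matching net devices are then resolved per port via
    # /sys/bus/pci/devices/<pci>/net/, yielding cvl_0_0 and cvl_0_1 here.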
00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:08.019 Found net devices under 0000:31:00.0: cvl_0_0 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:08.019 Found net devices under 0000:31:00.1: cvl_0_1 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.019 15:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.019 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:08.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:23:08.019 00:23:08.019 --- 10.0.0.2 ping statistics --- 00:23:08.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.019 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:08.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:23:08.019 00:23:08.019 --- 10.0.0.1 ping statistics --- 00:23:08.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.019 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3851834 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3851834 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:08.019 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3851834 ']' 00:23:08.020 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.020 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:08.020 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.020 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:08.020 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:08.020 [2024-11-06 15:35:25.318953] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
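[annotation] The two ping checks above validate the split that nvmf_tcp_init builds: the target port is moved into its own network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic crosses the physical link. Condensed from the trace, the essential commands are:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                      # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns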
00:23:08.020 [2024-11-06 15:35:25.319021] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.020 [2024-11-06 15:35:25.419800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.020 [2024-11-06 15:35:25.471578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.020 [2024-11-06 15:35:25.471627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.020 [2024-11-06 15:35:25.471636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.020 [2024-11-06 15:35:25.471643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.020 [2024-11-06 15:35:25.471649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.020 [2024-11-06 15:35:25.472409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3852177 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=aac239f6-ef76-4871-919d-3cb86927d343 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=ac196a3d-03fc-4df8-915e-fd7750c64ce5 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=658f2ed6-d77b-4219-bb32-bdb136fd9f72 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.282 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:08.282 null0 00:23:08.282 null1 00:23:08.282 [2024-11-06 15:35:26.234336] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:23:08.282 [2024-11-06 15:35:26.234411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852177 ] 00:23:08.282 null2 00:23:08.282 [2024-11-06 15:35:26.240236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.544 [2024-11-06 15:35:26.264488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.544 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.544 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3852177 /var/tmp/tgt2.sock 00:23:08.544 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3852177 ']' 00:23:08.544 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:08.544 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:08.544 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:08.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
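[annotation] The three UUIDs generated above (ns1uuid/ns2uuid/ns3uuid) become the namespace identities that the test later reads back through the kernel; the comparison hinges on normalizing a UUID into an NGUID. A minimal sketch of that normalization, an assumed equivalent of the uuid2nguid helper traced further down:

    uuid2nguid() {
        # An NGUID is the UUID's 32 hex digits: drop the dashes, uppercase.
        local nguid
        nguid=$(tr -d - <<< "$1")
        echo "${nguid^^}"
    }

For example, uuid2nguid aac239f6-ef76-4871-919d-3cb86927d343 prints AAC239F6EF764871919D3CB86927D343, which is what the [[ ... == ... ]] checks below compare against the nguid reported by nvme id-ns ... -o json | jq -r .nguid.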
00:23:08.544 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:08.544 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:08.544 [2024-11-06 15:35:26.330188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.544 [2024-11-06 15:35:26.383676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.804 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:08.804 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:08.804 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:09.066 [2024-11-06 15:35:26.939189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.066 [2024-11-06 15:35:26.955369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:09.066 nvme0n1 nvme0n2 00:23:09.066 nvme1n1 00:23:09.066 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:09.066 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:09.066 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:23:10.544 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:23:11.928 15:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid aac239f6-ef76-4871-919d-3cb86927d343 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=aac239f6ef764871919d3cb86927d343 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AAC239F6EF764871919D3CB86927D343 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ AAC239F6EF764871919D3CB86927D343 == \A\A\C\2\3\9\F\6\E\F\7\6\4\8\7\1\9\1\9\D\3\C\B\8\6\9\2\7\D\3\4\3 ]] 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid ac196a3d-03fc-4df8-915e-fd7750c64ce5 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ac196a3d03fc4df8915efd7750c64ce5 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AC196A3D03FC4DF8915EFD7750C64CE5 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ AC196A3D03FC4DF8915EFD7750C64CE5 == \A\C\1\9\6\A\3\D\0\3\F\C\4\D\F\8\9\1\5\E\F\D\7\7\5\0\C\6\4\C\E\5 ]] 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:23:11.928 15:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 658f2ed6-d77b-4219-bb32-bdb136fd9f72 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=658f2ed6d77b4219bb32bdb136fd9f72 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 658F2ED6D77B4219BB32BDB136FD9F72 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 658F2ED6D77B4219BB32BDB136FD9F72 == \6\5\8\F\2\E\D\6\D\7\7\B\4\2\1\9\B\B\3\2\B\D\B\1\3\6\F\D\9\F\7\2 ]] 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3852177 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3852177 ']' 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3852177 00:23:11.928 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:12.188 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:12.188 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3852177 00:23:12.188 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:12.188 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:12.188 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3852177' 00:23:12.188 killing process with pid 3852177 00:23:12.188 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3852177 00:23:12.188 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3852177 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.448 rmmod nvme_tcp 00:23:12.448 rmmod nvme_fabrics 00:23:12.448 rmmod nvme_keyring 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3851834 ']' 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3851834 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3851834 ']' 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3851834 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3851834 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3851834' 00:23:12.448 killing process with pid 3851834 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3851834 00:23:12.448 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3851834 00:23:12.708 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:12.708 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:12.708 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:12.708 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:12.708 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:12.708 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:12.708 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:12.708 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.708 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:12.708 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.708 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.708 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.620 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.620 00:23:14.620 real 0m15.075s 00:23:14.620 user 
0m11.440s 00:23:14.620 sys 0m6.994s 00:23:14.620 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:14.620 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:14.620 ************************************ 00:23:14.620 END TEST nvmf_nsid 00:23:14.620 ************************************ 00:23:14.620 15:35:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:14.621 00:23:14.621 real 13m11.768s 00:23:14.621 user 27m35.251s 00:23:14.621 sys 3m56.211s 00:23:14.621 15:35:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:14.621 15:35:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:14.621 ************************************ 00:23:14.621 END TEST nvmf_target_extra 00:23:14.621 ************************************ 00:23:14.621 15:35:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:14.621 15:35:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:14.621 15:35:32 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:14.621 15:35:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:14.882 ************************************ 00:23:14.882 START TEST nvmf_host 00:23:14.882 ************************************ 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:14.882 * Looking for test storage... 00:23:14.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:14.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.882 --rc genhtml_branch_coverage=1 00:23:14.882 --rc genhtml_function_coverage=1 00:23:14.882 --rc genhtml_legend=1 00:23:14.882 --rc geninfo_all_blocks=1 00:23:14.882 --rc geninfo_unexecuted_blocks=1 00:23:14.882 00:23:14.882 ' 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:14.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.882 --rc genhtml_branch_coverage=1 00:23:14.882 --rc genhtml_function_coverage=1 00:23:14.882 --rc genhtml_legend=1 00:23:14.882 --rc geninfo_all_blocks=1 00:23:14.882 --rc geninfo_unexecuted_blocks=1 00:23:14.882 00:23:14.882 ' 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:14.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.882 --rc genhtml_branch_coverage=1 00:23:14.882 --rc genhtml_function_coverage=1 00:23:14.882 --rc genhtml_legend=1 00:23:14.882 --rc geninfo_all_blocks=1 00:23:14.882 --rc geninfo_unexecuted_blocks=1 00:23:14.882 00:23:14.882 ' 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:14.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.882 --rc genhtml_branch_coverage=1 00:23:14.882 --rc genhtml_function_coverage=1 00:23:14.882 --rc genhtml_legend=1 00:23:14.882 --rc geninfo_all_blocks=1 00:23:14.882 --rc geninfo_unexecuted_blocks=1 00:23:14.882 00:23:14.882 ' 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
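The lt 1.15 2 / cmp_versions trace just above is scripts/common.sh deciding whether the installed lcov predates 2.0 before picking coverage option names. A condensed sketch of that field-by-field comparison (plain bash; the helper names follow the trace, but the body is simplified, not the shipped script):

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.
    local -a ver1=($1) ver2=($3)    # split each version on dots
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == *=* ]]    # equal versions satisfy only <=, >=, ==
}
lt 1.15 2 && echo "lcov is pre-2.0: use legacy option names"

In this log lt 1.15 2 returns 0, which is why the legacy --rc lcov_* option spellings appear in LCOV_OPTS below.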
00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.882 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.145 ************************************ 00:23:15.145 START TEST nvmf_multicontroller 00:23:15.145 ************************************ 00:23:15.145 15:35:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:15.145 * Looking for test storage... 
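The "common.sh: line 33: [: : integer expression expected" message seen above (and again later in this log) is benign: line 33 tests a variable with -eq while it is empty in this configuration, so the test fails with a warning and build_nvmf_app_args simply skips that branch. A defensive form of the same guard (a sketch; FLAG is a placeholder, the trace does not show which variable line 33 actually reads):

# FLAG and the branch body are placeholders; only the guard shape matters.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "optional feature enabled"
fi

Defaulting the expansion with ${FLAG:-0} keeps the numeric test well-formed even when the variable is unset or empty.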
00:23:15.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:15.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.145 --rc genhtml_branch_coverage=1 00:23:15.145 --rc genhtml_function_coverage=1 00:23:15.145 --rc genhtml_legend=1 00:23:15.145 --rc geninfo_all_blocks=1 00:23:15.145 --rc geninfo_unexecuted_blocks=1 00:23:15.145 00:23:15.145 ' 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:15.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.145 --rc genhtml_branch_coverage=1 00:23:15.145 --rc genhtml_function_coverage=1 00:23:15.145 --rc genhtml_legend=1 00:23:15.145 --rc geninfo_all_blocks=1 00:23:15.145 --rc geninfo_unexecuted_blocks=1 00:23:15.145 00:23:15.145 ' 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:15.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.145 --rc genhtml_branch_coverage=1 00:23:15.145 --rc genhtml_function_coverage=1 00:23:15.145 --rc genhtml_legend=1 00:23:15.145 --rc geninfo_all_blocks=1 00:23:15.145 --rc geninfo_unexecuted_blocks=1 00:23:15.145 00:23:15.145 ' 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:15.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.145 --rc genhtml_branch_coverage=1 00:23:15.145 --rc genhtml_function_coverage=1 00:23:15.145 --rc genhtml_legend=1 00:23:15.145 --rc geninfo_all_blocks=1 00:23:15.145 --rc geninfo_unexecuted_blocks=1 00:23:15.145 00:23:15.145 ' 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:15.145 15:35:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.145 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.407 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:15.408 15:35:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.408 15:35:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:23.553 
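gather_supported_nvmf_pci_devs has just declared the per-family arrays (e810, x722, mlx); the next trace lines fill them from pci_bus_cache entries keyed by vendor:device and then keep only the family this job selects (e810, per SPDK_TEST_NVMF_NICS). An equivalent standalone classification with lspci would look roughly like the sketch below, assuming lspci -nD -d '::0200' prints Ethernet controllers as "ADDR CLASS: VENDOR:DEVICE ..."; the test itself reads its prebuilt cache instead:

e810=() x722=() mlx=()
while read -r addr _class id _rest; do
    case $id in
        8086:1592|8086:159b) e810+=("$addr") ;;  # Intel E810 family (the IDs matched below)
        8086:37d2)           x722+=("$addr") ;;  # Intel X722
        15b3:*)              mlx+=("$addr")  ;;  # Mellanox ConnectX family
    esac
done < <(lspci -nD -d '::0200')
printf 'e810 NIC: %s\n' "${e810[@]}"

On this node both 0000:31:00.x functions report 0x8086:0x159b, so both land in the e810 bucket.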
15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.553 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:23.554 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:23.554 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.554 15:35:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:23.554 Found net devices under 0000:31:00.0: cvl_0_0 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:23.554 Found net devices under 0000:31:00.1: cvl_0_1 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
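The two "Found net devices under ..." lines come from resolving each PCI function to its kernel interface through sysfs. Condensed from the trace (common.sh lines 411 and 427-429):

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../0000:31:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the interface name
    net_devs+=("${pci_net_devs[@]}")
done

Here this yields cvl_0_0 for 0000:31:00.0 and cvl_0_1 for 0000:31:00.1, and with two live interfaces found, is_hw=yes and nvmf_tcp_init runs next.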
00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:23.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:23:23.554 00:23:23.554 --- 10.0.0.2 ping statistics --- 00:23:23.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.554 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:23:23.554 00:23:23.554 --- 10.0.0.1 ping statistics --- 00:23:23.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.554 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3857321 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3857321 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3857321 ']' 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:23.554 15:35:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.555 [2024-11-06 15:35:40.857517] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
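nvmf_tcp_init, traced above, splits target and initiator across one machine: cvl_0_0 moves into a private network namespace and takes the target address, cvl_0_1 stays in the root namespace as the initiator, and the two pings verify reachability in both directions before the target starts. The topology, condensed from the trace (the iptables rule also carries an SPDK_NVMF comment tag so the teardown's iptables-save | grep -v SPDK_NVMF pass can strip it):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is why every nvmf_tgt invocation that follows, including the nvmfpid=3857321 launch above, is prefixed with ip netns exec cvl_0_0_ns_spdk.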
00:23:23.555 [2024-11-06 15:35:40.857587] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.555 [2024-11-06 15:35:40.960500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:23.555 [2024-11-06 15:35:41.013560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.555 [2024-11-06 15:35:41.013610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.555 [2024-11-06 15:35:41.013619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.555 [2024-11-06 15:35:41.013626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.555 [2024-11-06 15:35:41.013633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.555 [2024-11-06 15:35:41.015539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.555 [2024-11-06 15:35:41.015700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.555 [2024-11-06 15:35:41.015700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.816 [2024-11-06 15:35:41.736824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.816 Malloc0 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.816 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.078 [2024-11-06 15:35:41.816242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.078 [2024-11-06 15:35:41.828149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.078 Malloc1 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3857384 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3857384 /var/tmp/bdevperf.sock 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3857384 ']' 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
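Before bdevperf comes up, the target has been configured entirely over JSON-RPC: one TCP transport, two 64 MiB/512 B malloc bdevs, two subsystems, and listeners on both ports of each. The same sequence as plain scripts/rpc.py calls (a sketch; the log drives identical methods through the rpc_cmd wrapper):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 repeats the same four subsystem calls with Malloc1 and SPDK00000000000002.

The two ports per subsystem matter: they give bdev_nvme_attach_controller a second, legitimate path to the same controller, which the failover test at the end of this section exercises.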
00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:24.078 15:35:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.022 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:25.022 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:25.022 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:25.022 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.022 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.283 NVMe0n1 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.283 1 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.283 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.284 request: 00:23:25.284 { 00:23:25.284 "name": "NVMe0", 00:23:25.284 "trtype": "tcp", 00:23:25.284 "traddr": "10.0.0.2", 00:23:25.284 "adrfam": "ipv4", 00:23:25.284 "trsvcid": "4420", 00:23:25.284 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:25.284 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:25.284 "hostaddr": "10.0.0.1", 00:23:25.284 "prchk_reftag": false, 00:23:25.284 "prchk_guard": false, 00:23:25.284 "hdgst": false, 00:23:25.284 "ddgst": false, 00:23:25.284 "allow_unrecognized_csi": false, 00:23:25.284 "method": "bdev_nvme_attach_controller", 00:23:25.284 "req_id": 1 00:23:25.284 } 00:23:25.284 Got JSON-RPC error response 00:23:25.284 response: 00:23:25.284 { 00:23:25.284 "code": -114, 00:23:25.284 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:25.284 } 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.284 request: 00:23:25.284 { 00:23:25.284 "name": "NVMe0", 00:23:25.284 "trtype": "tcp", 00:23:25.284 "traddr": "10.0.0.2", 00:23:25.284 "adrfam": "ipv4", 00:23:25.284 "trsvcid": "4420", 00:23:25.284 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:25.284 "hostaddr": "10.0.0.1", 00:23:25.284 "prchk_reftag": false, 00:23:25.284 "prchk_guard": false, 00:23:25.284 "hdgst": false, 00:23:25.284 "ddgst": false, 00:23:25.284 "allow_unrecognized_csi": false, 00:23:25.284 "method": "bdev_nvme_attach_controller", 00:23:25.284 "req_id": 1 00:23:25.284 } 00:23:25.284 Got JSON-RPC error response 00:23:25.284 response: 00:23:25.284 { 00:23:25.284 "code": -114, 00:23:25.284 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:25.284 } 00:23:25.284 15:35:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.284 request: 00:23:25.284 { 00:23:25.284 "name": "NVMe0", 00:23:25.284 "trtype": "tcp", 00:23:25.284 "traddr": "10.0.0.2", 00:23:25.284 "adrfam": "ipv4", 00:23:25.284 "trsvcid": "4420", 00:23:25.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.284 "hostaddr": "10.0.0.1", 00:23:25.284 "prchk_reftag": false, 00:23:25.284 "prchk_guard": false, 00:23:25.284 "hdgst": false, 00:23:25.284 "ddgst": false, 00:23:25.284 "multipath": "disable", 00:23:25.284 "allow_unrecognized_csi": false, 00:23:25.284 "method": "bdev_nvme_attach_controller", 00:23:25.284 "req_id": 1 00:23:25.284 } 00:23:25.284 Got JSON-RPC error response 00:23:25.284 response: 00:23:25.284 { 00:23:25.284 "code": -114, 00:23:25.284 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:25.284 } 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:25.284 15:35:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.284 request: 00:23:25.284 { 00:23:25.284 "name": "NVMe0", 00:23:25.284 "trtype": "tcp", 00:23:25.284 "traddr": "10.0.0.2", 00:23:25.284 "adrfam": "ipv4", 00:23:25.284 "trsvcid": "4420", 00:23:25.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.284 "hostaddr": "10.0.0.1", 00:23:25.284 "prchk_reftag": false, 00:23:25.284 "prchk_guard": false, 00:23:25.284 "hdgst": false, 00:23:25.284 "ddgst": false, 00:23:25.284 "multipath": "failover", 00:23:25.284 "allow_unrecognized_csi": false, 00:23:25.284 "method": "bdev_nvme_attach_controller", 00:23:25.284 "req_id": 1 00:23:25.284 } 00:23:25.284 Got JSON-RPC error response 00:23:25.284 response: 00:23:25.284 { 00:23:25.284 "code": -114, 00:23:25.284 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:25.284 } 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.284 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.545 NVMe0n1 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
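Taken together, the four attach attempts above pin down the name-collision rules for bdev_nvme_attach_controller: reusing controller name NVMe0 for a different subsystem NQN, or for the same network path with multipath set to disable or failover, is rejected with JSON-RPC error -114, while attaching the second listener port (4421) of the same subsystem is accepted as an additional path. A minimal standalone sketch of that sequence, assuming an SPDK checkout and a bdevperf instance already serving /var/tmp/bdevperf.sock (scripts/rpc.py stands in here for the suite's rpc_cmd wrapper):

rpc='scripts/rpc.py -s /var/tmp/bdevperf.sock'
# Rejected cases, each answered with error -114 as in the responses logged above:
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || true              # same name, different NQN
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable || true   # multipath disabled
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover || true  # same path again
# Accepted: a second listener port of the same subsystem becomes a new path.
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1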
00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.545 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:25.545 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:26.928 { 00:23:26.928 "results": [ 00:23:26.928 { 00:23:26.928 "job": "NVMe0n1", 00:23:26.928 "core_mask": "0x1", 00:23:26.928 "workload": "write", 00:23:26.928 "status": "finished", 00:23:26.928 "queue_depth": 128, 00:23:26.928 "io_size": 4096, 00:23:26.928 "runtime": 1.005811, 00:23:26.928 "iops": 28658.465656072563, 00:23:26.928 "mibps": 111.94713146903345, 00:23:26.928 "io_failed": 0, 00:23:26.928 "io_timeout": 0, 00:23:26.928 "avg_latency_us": 4455.269526684013, 00:23:26.928 "min_latency_us": 2129.92, 00:23:26.928 "max_latency_us": 15291.733333333334 00:23:26.928 } 00:23:26.928 ], 00:23:26.928 "core_count": 1 00:23:26.928 } 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3857384 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 3857384 ']' 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3857384 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3857384 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3857384' 00:23:26.928 killing process with pid 3857384 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3857384 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3857384 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.928 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:26.929 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:26.929 [2024-11-06 15:35:41.956271] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:23:26.929 [2024-11-06 15:35:41.956350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857384 ] 00:23:26.929 [2024-11-06 15:35:42.052280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.929 [2024-11-06 15:35:42.105388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.929 [2024-11-06 15:35:43.453803] bdev.c:4897:bdev_name_add: *ERROR*: Bdev name 7592797a-2222-46a4-8873-7eff832756ae already exists 00:23:26.929 [2024-11-06 15:35:43.453847] bdev.c:8106:bdev_register: *ERROR*: Unable to add uuid:7592797a-2222-46a4-8873-7eff832756ae alias for bdev NVMe1n1 00:23:26.929 [2024-11-06 15:35:43.453857] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:26.929 Running I/O for 1 seconds... 00:23:26.929 28632.00 IOPS, 111.84 MiB/s 00:23:26.929 Latency(us) 00:23:26.929 [2024-11-06T14:35:44.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.929 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:26.929 NVMe0n1 : 1.01 28658.47 111.95 0.00 0.00 4455.27 2129.92 15291.73 00:23:26.929 [2024-11-06T14:35:44.912Z] =================================================================================================================== 00:23:26.929 [2024-11-06T14:35:44.912Z] Total : 28658.47 111.95 0.00 0.00 4455.27 2129.92 15291.73 00:23:26.929 Received shutdown signal, test time was about 1.000000 seconds 00:23:26.929 00:23:26.929 Latency(us) 00:23:26.929 [2024-11-06T14:35:44.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.929 [2024-11-06T14:35:44.912Z] =================================================================================================================== 00:23:26.929 [2024-11-06T14:35:44.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.929 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.929 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:26.929 rmmod nvme_tcp 00:23:26.929 rmmod nvme_fabrics 00:23:27.189 rmmod nvme_keyring 00:23:27.189 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:27.189 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:27.189 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
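As a sanity check on the perform_tests summary above, the reported throughput is simply IOPS times the 4096-byte I/O size; recomputing it from the JSON fields reproduces the mibps value:

awk 'BEGIN { iops = 28658.465656072563; io_size = 4096;
             printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# prints 111.95 MiB/s, matching "mibps": 111.947... in the results block above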
00:23:27.189 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3857321 ']' 00:23:27.189 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3857321 00:23:27.189 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3857321 ']' 00:23:27.189 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3857321 00:23:27.189 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:27.189 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:27.189 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3857321 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3857321' 00:23:27.189 killing process with pid 3857321 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3857321 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3857321 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.189 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:29.734 00:23:29.734 real 0m14.331s 00:23:29.734 user 0m17.838s 00:23:29.734 sys 0m6.617s 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.734 ************************************ 00:23:29.734 END TEST nvmf_multicontroller 00:23:29.734 ************************************ 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.734 ************************************ 00:23:29.734 START TEST nvmf_aer 00:23:29.734 ************************************ 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:29.734 * Looking for test storage... 00:23:29.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.734 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:29.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.735 --rc genhtml_branch_coverage=1 00:23:29.735 --rc genhtml_function_coverage=1 00:23:29.735 --rc genhtml_legend=1 00:23:29.735 --rc geninfo_all_blocks=1 00:23:29.735 --rc geninfo_unexecuted_blocks=1 00:23:29.735 00:23:29.735 ' 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:29.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.735 --rc genhtml_branch_coverage=1 00:23:29.735 --rc genhtml_function_coverage=1 00:23:29.735 --rc genhtml_legend=1 00:23:29.735 --rc geninfo_all_blocks=1 00:23:29.735 --rc geninfo_unexecuted_blocks=1 00:23:29.735 00:23:29.735 ' 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:29.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.735 --rc genhtml_branch_coverage=1 00:23:29.735 --rc genhtml_function_coverage=1 00:23:29.735 --rc genhtml_legend=1 00:23:29.735 --rc geninfo_all_blocks=1 00:23:29.735 --rc geninfo_unexecuted_blocks=1 00:23:29.735 00:23:29.735 ' 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:29.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.735 --rc genhtml_branch_coverage=1 00:23:29.735 --rc genhtml_function_coverage=1 00:23:29.735 --rc genhtml_legend=1 00:23:29.735 --rc geninfo_all_blocks=1 00:23:29.735 --rc geninfo_unexecuted_blocks=1 00:23:29.735 00:23:29.735 ' 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:29.735 15:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:37.875 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:37.875 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:37.875 Found net devices under 0000:31:00.0: cvl_0_0 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.875 15:35:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:37.875 Found net devices under 0000:31:00.1: cvl_0_1 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:37.875 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:37.876 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.876 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.876 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:37.876 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:37.876 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.876 15:35:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:37.876 
15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:37.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:23:37.876 00:23:37.876 --- 10.0.0.2 ping statistics --- 00:23:37.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.876 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:23:37.876 00:23:37.876 --- 10.0.0.1 ping statistics --- 00:23:37.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.876 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3862395 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3862395 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3862395 ']' 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:37.876 15:35:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.876 [2024-11-06 15:35:55.350970] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:23:37.876 [2024-11-06 15:35:55.351035] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.876 [2024-11-06 15:35:55.451581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.876 [2024-11-06 15:35:55.504589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.876 [2024-11-06 15:35:55.504640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.876 [2024-11-06 15:35:55.504649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.876 [2024-11-06 15:35:55.504656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.876 [2024-11-06 15:35:55.504663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.876 [2024-11-06 15:35:55.507107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.876 [2024-11-06 15:35:55.507268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.876 [2024-11-06 15:35:55.507428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:37.876 [2024-11-06 15:35:55.507428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.447 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:38.447 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:23:38.447 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.447 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:38.447 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.447 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.448 [2024-11-06 15:35:56.225599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.448 Malloc0 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.448 [2024-11-06 15:35:56.298491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.448 [ 00:23:38.448 { 00:23:38.448 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:38.448 "subtype": "Discovery", 00:23:38.448 "listen_addresses": [], 00:23:38.448 "allow_any_host": true, 00:23:38.448 "hosts": [] 00:23:38.448 }, 00:23:38.448 { 00:23:38.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.448 "subtype": "NVMe", 00:23:38.448 "listen_addresses": [ 00:23:38.448 { 00:23:38.448 "trtype": "TCP", 00:23:38.448 "adrfam": "IPv4", 00:23:38.448 "traddr": "10.0.0.2", 00:23:38.448 "trsvcid": "4420" 00:23:38.448 } 00:23:38.448 ], 00:23:38.448 "allow_any_host": true, 00:23:38.448 "hosts": [], 00:23:38.448 "serial_number": "SPDK00000000000001", 00:23:38.448 "model_number": "SPDK bdev Controller", 00:23:38.448 "max_namespaces": 2, 00:23:38.448 "min_cntlid": 1, 00:23:38.448 "max_cntlid": 65519, 00:23:38.448 "namespaces": [ 00:23:38.448 { 00:23:38.448 "nsid": 1, 00:23:38.448 "bdev_name": "Malloc0", 00:23:38.448 "name": "Malloc0", 00:23:38.448 "nguid": "FF9633CE4BD446C0A18765159017B84F", 00:23:38.448 "uuid": "ff9633ce-4bd4-46c0-a187-65159017b84f" 00:23:38.448 } 00:23:38.448 ] 00:23:38.448 } 00:23:38.448 ] 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3862446 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:23:38.448 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:38.709 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:38.709 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:23:38.709 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:23:38.709 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:38.709 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:38.709 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:23:38.709 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:23:38.709 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:38.709 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:38.709 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 3 -lt 200 ']' 00:23:38.709 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=4 00:23:38.709 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.969 Malloc1 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.969 [ 00:23:38.969 { 00:23:38.969 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:38.969 "subtype": "Discovery", 00:23:38.969 "listen_addresses": [], 00:23:38.969 "allow_any_host": true, 00:23:38.969 "hosts": [] 00:23:38.969 }, 00:23:38.969 { 00:23:38.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.969 "subtype": "NVMe", 00:23:38.969 "listen_addresses": [ 00:23:38.969 { 00:23:38.969 "trtype": "TCP", 00:23:38.969 "adrfam": "IPv4", 00:23:38.969 "traddr": "10.0.0.2", 00:23:38.969 "trsvcid": "4420" 00:23:38.969 } 00:23:38.969 ], 00:23:38.969 "allow_any_host": true, 00:23:38.969 "hosts": [], 00:23:38.969 "serial_number": "SPDK00000000000001", 00:23:38.969 "model_number": "SPDK bdev Controller", 00:23:38.969 "max_namespaces": 2, 00:23:38.969 "min_cntlid": 1, 00:23:38.969 "max_cntlid": 65519, 00:23:38.969 "namespaces": [ 00:23:38.969 { 00:23:38.969 "nsid": 1, 00:23:38.969 "bdev_name": "Malloc0", 00:23:38.969 "name": "Malloc0", 00:23:38.969 "nguid": "FF9633CE4BD446C0A18765159017B84F", 00:23:38.969 "uuid": "ff9633ce-4bd4-46c0-a187-65159017b84f" 00:23:38.969 }, 00:23:38.969 { 00:23:38.969 "nsid": 2, 00:23:38.969 "bdev_name": "Malloc1", 00:23:38.969 "name": "Malloc1", 00:23:38.969 "nguid": "536ED969CA954043B65BF6212F23FCEF", 00:23:38.969 "uuid": "536ed969-ca95-4043-b65b-f6212f23fcef" 00:23:38.969 } 00:23:38.969 ] 00:23:38.969 } 00:23:38.969 ] 00:23:38.969 Asynchronous Event Request test 00:23:38.969 Attaching to 10.0.0.2 00:23:38.969 Attached to 10.0.0.2 00:23:38.969 Registering asynchronous event callbacks... 00:23:38.969 Starting namespace attribute notice tests for all controllers... 00:23:38.969 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:38.969 aer_cb - Changed Namespace 00:23:38.969 Cleaning up... 
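The AER exchange above is driven entirely from the target side: the host-side aer tool connects to cnode1 (which starts with Malloc0 as nsid 1), arms its asynchronous-event callback, and the Changed Namespace notice on log page 4 fires once a second namespace is added. A condensed sketch of the target-side RPC sequence, assuming the target's default RPC socket and with scripts/rpc.py again standing in for the suite's rpc_cmd wrapper:

rpc='scripts/rpc.py'
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# With test/nvme/aer/aer connected and waiting on its touch file, adding a
# second namespace triggers the "aer_cb - Changed Namespace" event seen above:
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2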
00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3862446 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.969 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.969 rmmod nvme_tcp 00:23:38.969 rmmod nvme_fabrics 00:23:38.969 rmmod nvme_keyring 00:23:39.229 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.229 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:39.229 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:39.229 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3862395 ']' 00:23:39.229 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3862395 00:23:39.229 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3862395 ']' 00:23:39.229 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3862395 00:23:39.229 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:23:39.229 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:39.229 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3862395 00:23:39.229 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:39.230 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:39.230 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 3862395' 00:23:39.230 killing process with pid 3862395 00:23:39.230 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 3862395 00:23:39.230 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3862395 00:23:39.230 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:39.230 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:39.230 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:39.230 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:39.490 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:39.490 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:39.490 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:39.490 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:39.490 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:39.490 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.490 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.490 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.404 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:41.404 00:23:41.404 real 0m11.963s 00:23:41.404 user 0m9.046s 00:23:41.404 sys 0m6.391s 00:23:41.404 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:41.404 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.404 ************************************ 00:23:41.404 END TEST nvmf_aer 00:23:41.404 ************************************ 00:23:41.404 15:35:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:41.404 15:35:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:41.404 15:35:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:41.404 15:35:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.404 ************************************ 00:23:41.404 START TEST nvmf_async_init 00:23:41.404 ************************************ 00:23:41.404 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:41.665 * Looking for test storage... 
00:23:41.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:41.665 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:41.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.666 --rc genhtml_branch_coverage=1 00:23:41.666 --rc genhtml_function_coverage=1 00:23:41.666 --rc genhtml_legend=1 00:23:41.666 --rc geninfo_all_blocks=1 00:23:41.666 --rc geninfo_unexecuted_blocks=1 00:23:41.666 00:23:41.666 ' 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:41.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.666 --rc genhtml_branch_coverage=1 00:23:41.666 --rc genhtml_function_coverage=1 00:23:41.666 --rc genhtml_legend=1 00:23:41.666 --rc geninfo_all_blocks=1 00:23:41.666 --rc geninfo_unexecuted_blocks=1 00:23:41.666 00:23:41.666 ' 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:41.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.666 --rc genhtml_branch_coverage=1 00:23:41.666 --rc genhtml_function_coverage=1 00:23:41.666 --rc genhtml_legend=1 00:23:41.666 --rc geninfo_all_blocks=1 00:23:41.666 --rc geninfo_unexecuted_blocks=1 00:23:41.666 00:23:41.666 ' 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:41.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.666 --rc genhtml_branch_coverage=1 00:23:41.666 --rc genhtml_function_coverage=1 00:23:41.666 --rc genhtml_legend=1 00:23:41.666 --rc geninfo_all_blocks=1 00:23:41.666 --rc geninfo_unexecuted_blocks=1 00:23:41.666 00:23:41.666 ' 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.666 15:35:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:41.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:41.666 15:35:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ca7e75c8f689457fa15597e725a92ca8 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:41.666 15:35:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:49.806 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:49.806 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:49.806 Found net devices under 0000:31:00.0: cvl_0_0 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:49.806 Found net devices under 0000:31:00.1: cvl_0_1 00:23:49.806 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.807 15:36:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.807 15:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:49.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:23:49.807 00:23:49.807 --- 10.0.0.2 ping statistics --- 00:23:49.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.807 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:49.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:23:49.807 00:23:49.807 --- 10.0.0.1 ping statistics --- 00:23:49.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.807 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3866955 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3866955 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 3866955 ']' 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:49.807 15:36:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:49.807 [2024-11-06 15:36:07.396208] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:23:49.807 [2024-11-06 15:36:07.396274] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.807 [2024-11-06 15:36:07.495179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.807 [2024-11-06 15:36:07.546530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.807 [2024-11-06 15:36:07.546581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.807 [2024-11-06 15:36:07.546590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.807 [2024-11-06 15:36:07.546598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.807 [2024-11-06 15:36:07.546604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.807 [2024-11-06 15:36:07.547385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.379 [2024-11-06 15:36:08.275176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.379 null0 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ca7e75c8f689457fa15597e725a92ca8 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.379 [2024-11-06 15:36:08.335545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.379 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.640 nvme0n1 00:23:50.640 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.640 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:50.640 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.640 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.640 [ 00:23:50.640 { 00:23:50.640 "name": "nvme0n1", 00:23:50.640 "aliases": [ 00:23:50.640 "ca7e75c8-f689-457f-a155-97e725a92ca8" 00:23:50.640 ], 00:23:50.640 "product_name": "NVMe disk", 00:23:50.640 "block_size": 512, 00:23:50.640 "num_blocks": 2097152, 00:23:50.640 "uuid": "ca7e75c8-f689-457f-a155-97e725a92ca8", 00:23:50.640 "numa_id": 0, 00:23:50.640 "assigned_rate_limits": { 00:23:50.640 "rw_ios_per_sec": 0, 00:23:50.640 "rw_mbytes_per_sec": 0, 00:23:50.640 "r_mbytes_per_sec": 0, 00:23:50.640 "w_mbytes_per_sec": 0 00:23:50.640 }, 00:23:50.640 "claimed": false, 00:23:50.640 "zoned": false, 00:23:50.640 "supported_io_types": { 00:23:50.640 "read": true, 00:23:50.640 "write": true, 00:23:50.640 "unmap": false, 00:23:50.640 "flush": true, 00:23:50.640 "reset": true, 00:23:50.640 "nvme_admin": true, 00:23:50.640 "nvme_io": true, 00:23:50.640 "nvme_io_md": false, 00:23:50.640 "write_zeroes": true, 00:23:50.640 "zcopy": false, 00:23:50.640 "get_zone_info": false, 00:23:50.640 "zone_management": false, 00:23:50.640 "zone_append": false, 00:23:50.640 "compare": true, 00:23:50.640 "compare_and_write": true, 00:23:50.640 "abort": true, 00:23:50.640 "seek_hole": false, 00:23:50.640 "seek_data": false, 00:23:50.640 "copy": true, 00:23:50.640 "nvme_iov_md": false 00:23:50.640 }, 00:23:50.640 
"memory_domains": [ 00:23:50.640 { 00:23:50.640 "dma_device_id": "system", 00:23:50.640 "dma_device_type": 1 00:23:50.640 } 00:23:50.640 ], 00:23:50.640 "driver_specific": { 00:23:50.640 "nvme": [ 00:23:50.640 { 00:23:50.640 "trid": { 00:23:50.640 "trtype": "TCP", 00:23:50.640 "adrfam": "IPv4", 00:23:50.640 "traddr": "10.0.0.2", 00:23:50.640 "trsvcid": "4420", 00:23:50.640 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:50.640 }, 00:23:50.640 "ctrlr_data": { 00:23:50.640 "cntlid": 1, 00:23:50.640 "vendor_id": "0x8086", 00:23:50.640 "model_number": "SPDK bdev Controller", 00:23:50.640 "serial_number": "00000000000000000000", 00:23:50.640 "firmware_revision": "25.01", 00:23:50.640 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:50.640 "oacs": { 00:23:50.640 "security": 0, 00:23:50.640 "format": 0, 00:23:50.640 "firmware": 0, 00:23:50.640 "ns_manage": 0 00:23:50.640 }, 00:23:50.640 "multi_ctrlr": true, 00:23:50.640 "ana_reporting": false 00:23:50.640 }, 00:23:50.640 "vs": { 00:23:50.640 "nvme_version": "1.3" 00:23:50.640 }, 00:23:50.640 "ns_data": { 00:23:50.640 "id": 1, 00:23:50.640 "can_share": true 00:23:50.640 } 00:23:50.640 } 00:23:50.640 ], 00:23:50.640 "mp_policy": "active_passive" 00:23:50.640 } 00:23:50.640 } 00:23:50.640 ] 00:23:50.640 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.640 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:50.640 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.640 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.640 [2024-11-06 15:36:08.612032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:50.640 [2024-11-06 15:36:08.612127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc664a0 (9): Bad file descriptor 00:23:50.900 [2024-11-06 15:36:08.743865] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:50.900 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.900 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:50.900 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.900 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.901 [ 00:23:50.901 { 00:23:50.901 "name": "nvme0n1", 00:23:50.901 "aliases": [ 00:23:50.901 "ca7e75c8-f689-457f-a155-97e725a92ca8" 00:23:50.901 ], 00:23:50.901 "product_name": "NVMe disk", 00:23:50.901 "block_size": 512, 00:23:50.901 "num_blocks": 2097152, 00:23:50.901 "uuid": "ca7e75c8-f689-457f-a155-97e725a92ca8", 00:23:50.901 "numa_id": 0, 00:23:50.901 "assigned_rate_limits": { 00:23:50.901 "rw_ios_per_sec": 0, 00:23:50.901 "rw_mbytes_per_sec": 0, 00:23:50.901 "r_mbytes_per_sec": 0, 00:23:50.901 "w_mbytes_per_sec": 0 00:23:50.901 }, 00:23:50.901 "claimed": false, 00:23:50.901 "zoned": false, 00:23:50.901 "supported_io_types": { 00:23:50.901 "read": true, 00:23:50.901 "write": true, 00:23:50.901 "unmap": false, 00:23:50.901 "flush": true, 00:23:50.901 "reset": true, 00:23:50.901 "nvme_admin": true, 00:23:50.901 "nvme_io": true, 00:23:50.901 "nvme_io_md": false, 00:23:50.901 "write_zeroes": true, 00:23:50.901 "zcopy": false, 00:23:50.901 "get_zone_info": false, 00:23:50.901 "zone_management": false, 00:23:50.901 "zone_append": false, 00:23:50.901 "compare": true, 00:23:50.901 "compare_and_write": true, 00:23:50.901 "abort": true, 00:23:50.901 "seek_hole": false, 00:23:50.901 "seek_data": false, 00:23:50.901 "copy": true, 00:23:50.901 "nvme_iov_md": false 00:23:50.901 }, 00:23:50.901 "memory_domains": [ 00:23:50.901 { 00:23:50.901 "dma_device_id": "system", 00:23:50.901 "dma_device_type": 1 00:23:50.901 } 00:23:50.901 ], 00:23:50.901 "driver_specific": { 00:23:50.901 "nvme": [ 00:23:50.901 { 00:23:50.901 "trid": { 00:23:50.901 "trtype": "TCP", 00:23:50.901 "adrfam": "IPv4", 00:23:50.901 "traddr": "10.0.0.2", 00:23:50.901 "trsvcid": "4420", 00:23:50.901 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:50.901 }, 00:23:50.901 "ctrlr_data": { 00:23:50.901 "cntlid": 2, 00:23:50.901 "vendor_id": "0x8086", 00:23:50.901 "model_number": "SPDK bdev Controller", 00:23:50.901 "serial_number": "00000000000000000000", 00:23:50.901 "firmware_revision": "25.01", 00:23:50.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:50.901 "oacs": { 00:23:50.901 "security": 0, 00:23:50.901 "format": 0, 00:23:50.901 "firmware": 0, 00:23:50.901 "ns_manage": 0 00:23:50.901 }, 00:23:50.901 "multi_ctrlr": true, 00:23:50.901 "ana_reporting": false 00:23:50.901 }, 00:23:50.901 "vs": { 00:23:50.901 "nvme_version": "1.3" 00:23:50.901 }, 00:23:50.901 "ns_data": { 00:23:50.901 "id": 1, 00:23:50.901 "can_share": true 00:23:50.901 } 00:23:50.901 } 00:23:50.901 ], 00:23:50.901 "mp_policy": "active_passive" 00:23:50.901 } 00:23:50.901 } 00:23:50.901 ] 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
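Worth noting: the host side of async_init is SPDK's own bdev_nvme initiator rather than the kernel driver, so tearing the controller down ahead of the TLS reconfiguration in the records that follow is a single RPC:

    scripts/rpc.py bdev_nvme_detach_controller nvme0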
00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.nFTTHsyqVz 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.nFTTHsyqVz 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.nFTTHsyqVz 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.901 [2024-11-06 15:36:08.832710] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.901 [2024-11-06 15:36:08.832888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.901 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.901 [2024-11-06 15:36:08.856799] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.162 nvme0n1 00:23:51.162 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.162 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:51.162 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.162 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.162 [ 00:23:51.162 { 00:23:51.162 "name": "nvme0n1", 00:23:51.162 "aliases": [ 00:23:51.162 "ca7e75c8-f689-457f-a155-97e725a92ca8" 00:23:51.162 ], 00:23:51.162 "product_name": "NVMe disk", 00:23:51.162 "block_size": 512, 00:23:51.162 "num_blocks": 2097152, 00:23:51.162 "uuid": "ca7e75c8-f689-457f-a155-97e725a92ca8", 00:23:51.162 "numa_id": 0, 00:23:51.162 "assigned_rate_limits": { 00:23:51.162 "rw_ios_per_sec": 0, 00:23:51.162 "rw_mbytes_per_sec": 0, 00:23:51.162 "r_mbytes_per_sec": 0, 00:23:51.162 "w_mbytes_per_sec": 0 00:23:51.162 }, 00:23:51.162 "claimed": false, 00:23:51.162 "zoned": false, 00:23:51.162 "supported_io_types": { 00:23:51.162 "read": true, 00:23:51.162 "write": true, 00:23:51.162 "unmap": false, 00:23:51.162 "flush": true, 00:23:51.162 "reset": true, 00:23:51.162 "nvme_admin": true, 00:23:51.162 "nvme_io": true, 00:23:51.162 "nvme_io_md": false, 00:23:51.162 "write_zeroes": true, 00:23:51.162 "zcopy": false, 00:23:51.162 "get_zone_info": false, 00:23:51.162 "zone_management": false, 00:23:51.162 "zone_append": false, 00:23:51.162 "compare": true, 00:23:51.162 "compare_and_write": true, 00:23:51.162 "abort": true, 00:23:51.162 "seek_hole": false, 00:23:51.162 "seek_data": false, 00:23:51.162 "copy": true, 00:23:51.162 "nvme_iov_md": false 00:23:51.162 }, 00:23:51.162 "memory_domains": [ 00:23:51.162 { 00:23:51.162 "dma_device_id": "system", 00:23:51.162 "dma_device_type": 1 00:23:51.162 } 00:23:51.162 ], 00:23:51.162 "driver_specific": { 00:23:51.163 "nvme": [ 00:23:51.163 { 00:23:51.163 "trid": { 00:23:51.163 "trtype": "TCP", 00:23:51.163 "adrfam": "IPv4", 00:23:51.163 "traddr": "10.0.0.2", 00:23:51.163 "trsvcid": "4421", 00:23:51.163 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.163 }, 00:23:51.163 "ctrlr_data": { 00:23:51.163 "cntlid": 3, 00:23:51.163 "vendor_id": "0x8086", 00:23:51.163 "model_number": "SPDK bdev Controller", 00:23:51.163 "serial_number": "00000000000000000000", 00:23:51.163 "firmware_revision": "25.01", 00:23:51.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.163 "oacs": { 00:23:51.163 "security": 0, 00:23:51.163 "format": 0, 00:23:51.163 "firmware": 0, 00:23:51.163 "ns_manage": 0 00:23:51.163 }, 00:23:51.163 "multi_ctrlr": true, 00:23:51.163 "ana_reporting": false 00:23:51.163 }, 00:23:51.163 "vs": { 00:23:51.163 "nvme_version": "1.3" 00:23:51.163 }, 00:23:51.163 "ns_data": { 00:23:51.163 "id": 1, 00:23:51.163 "can_share": true 00:23:51.163 } 00:23:51.163 } 00:23:51.163 ], 00:23:51.163 "mp_policy": "active_passive" 00:23:51.163 } 00:23:51.163 } 00:23:51.163 ] 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.nFTTHsyqVz 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
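That third bdev dump (same namespace, now reached via trsvcid 4421 with cntlid 3) confirms the secure listener serviced the reconnect. The TLS leg hinges on an NVMe-oF TLS PSK in interchange format (the NVMeTLSkey-1:01:... string): the test writes it to a mktemp file, locks the file to mode 0600, and registers it with the file-based keyring; any-host access is then disabled so only the PSK-bearing host NQN can connect, and both listener and initiator opt in explicitly (each side logs that TLS support is considered experimental). A sketch of the wiring, with the key path, key name, and NQNs exactly as they appear above:

    chmod 0600 /tmp/tmp.nFTTHsyqVz               # file holds the NVMeTLSkey-1:01:... PSK
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nFTTHsyqVz
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0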
00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.163 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.163 rmmod nvme_tcp 00:23:51.163 rmmod nvme_fabrics 00:23:51.163 rmmod nvme_keyring 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3866955 ']' 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3866955 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 3866955 ']' 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 3866955 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3866955 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3866955' 00:23:51.163 killing process with pid 3866955 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 3866955 00:23:51.163 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 3866955 00:23:51.424 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:51.424 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:51.424 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:51.424 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:51.424 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:51.424 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:51.424 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:51.424 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.424 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:51.424 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
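The nvmftestfini running here unwinds the prologue in reverse: the kernel modules come out (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), the nvmf_tgt started as pid 3866955 is killed, the SPDK_NVMF-tagged iptables rules are filtered back out, and the _remove_spdk_ns call that follows (its xtrace routed to /dev/null) presumably deletes the cvl_0_0_ns_spdk namespace that held the target-side E810 port (cvl_0_0, 10.0.0.2), returning the initiator port (cvl_0_1, 10.0.0.1) to a clean slate for the next test. A roughly equivalent manual teardown; the netns deletion is an assumption, since the helper's body is hidden behind xtrace_disable_per_cmd:

    modprobe -v -r nvme-tcp nvme-fabrics
    kill 3866955
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns del cvl_0_0_ns_spdk   # assumed: what _remove_spdk_ns does under the hood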
00:23:51.424 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.424 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:53.970 00:23:53.970 real 0m11.971s 00:23:53.970 user 0m4.335s 00:23:53.970 sys 0m6.198s 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.970 ************************************ 00:23:53.970 END TEST nvmf_async_init 00:23:53.970 ************************************ 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.970 ************************************ 00:23:53.970 START TEST dma 00:23:53.970 ************************************ 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:53.970 * Looking for test storage... 00:23:53.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:53.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.970 --rc genhtml_branch_coverage=1 00:23:53.970 --rc genhtml_function_coverage=1 00:23:53.970 --rc genhtml_legend=1 00:23:53.970 --rc geninfo_all_blocks=1 00:23:53.970 --rc geninfo_unexecuted_blocks=1 00:23:53.970 00:23:53.970 ' 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:53.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.970 --rc genhtml_branch_coverage=1 00:23:53.970 --rc genhtml_function_coverage=1 00:23:53.970 --rc genhtml_legend=1 00:23:53.970 --rc geninfo_all_blocks=1 00:23:53.970 --rc geninfo_unexecuted_blocks=1 00:23:53.970 00:23:53.970 ' 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:53.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.970 --rc genhtml_branch_coverage=1 00:23:53.970 --rc genhtml_function_coverage=1 00:23:53.970 --rc genhtml_legend=1 00:23:53.970 --rc geninfo_all_blocks=1 00:23:53.970 --rc geninfo_unexecuted_blocks=1 00:23:53.970 00:23:53.970 ' 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:53.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.970 --rc genhtml_branch_coverage=1 00:23:53.970 --rc genhtml_function_coverage=1 00:23:53.970 --rc genhtml_legend=1 00:23:53.970 --rc geninfo_all_blocks=1 00:23:53.970 --rc geninfo_unexecuted_blocks=1 00:23:53.970 00:23:53.970 ' 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.970 
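The trace above is scripts/common.sh splitting two dotted version strings into arrays and comparing them field by field (here, lcov 1.15 against 2) to pick compatible --rc option spellings. A minimal standalone sketch of that comparison, assuming purely numeric components; the function name is illustrative, not SPDK's:

ver_lt() {                          # 0 (true) iff $1 sorts before $2
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                        # equal versions are not "less than"
}
ver_lt 1.15 2 && echo 'lcov < 2: use legacy --rc lcov_* option names'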
15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.970 15:36:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:53.971 00:23:53.971 real 0m0.239s 00:23:53.971 user 0m0.135s 00:23:53.971 sys 0m0.120s 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:53.971 ************************************ 00:23:53.971 END TEST dma 00:23:53.971 ************************************ 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.971 ************************************ 00:23:53.971 START TEST nvmf_identify 00:23:53.971 
************************************ 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:53.971 * Looking for test storage... 00:23:53.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.971 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:54.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.232 --rc genhtml_branch_coverage=1 00:23:54.232 --rc genhtml_function_coverage=1 00:23:54.232 --rc genhtml_legend=1 00:23:54.232 --rc geninfo_all_blocks=1 00:23:54.232 --rc geninfo_unexecuted_blocks=1 00:23:54.232 00:23:54.232 ' 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:54.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.232 --rc genhtml_branch_coverage=1 00:23:54.232 --rc genhtml_function_coverage=1 00:23:54.232 --rc genhtml_legend=1 00:23:54.232 --rc geninfo_all_blocks=1 00:23:54.232 --rc geninfo_unexecuted_blocks=1 00:23:54.232 00:23:54.232 ' 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:54.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.232 --rc genhtml_branch_coverage=1 00:23:54.232 --rc genhtml_function_coverage=1 00:23:54.232 --rc genhtml_legend=1 00:23:54.232 --rc geninfo_all_blocks=1 00:23:54.232 --rc geninfo_unexecuted_blocks=1 00:23:54.232 00:23:54.232 ' 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:54.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.232 --rc genhtml_branch_coverage=1 00:23:54.232 --rc genhtml_function_coverage=1 00:23:54.232 --rc genhtml_legend=1 00:23:54.232 --rc geninfo_all_blocks=1 00:23:54.232 --rc geninfo_unexecuted_blocks=1 00:23:54.232 00:23:54.232 ' 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:54.232 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.233 15:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.233 15:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.233 15:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.233 15:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.233 15:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.369 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:02.370 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:02.370 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
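Above, gather_supported_nvmf_pci_devs builds whitelists of supported device IDs (Intel E810/X722 and several Mellanox parts) before walking the PCI bus. A condensed, hypothetical form of that scan for the two E810 IDs matched in this run:

# Sketch only: the real function also caches the PCI bus and checks
# driver state; the IDs below are the two E810 entries appended above.
e810=(0x1592 0x159b)
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    [[ $vendor == 0x8086 ]] || continue
    for id in "${e810[@]}"; do
        [[ $device == "$id" ]] && echo "Found ${dev##*/} ($vendor - $device)"
    done
done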
00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:02.370 Found net devices under 0000:31:00.0: cvl_0_0 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:02.370 Found net devices under 0000:31:00.1: cvl_0_1 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:02.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:24:02.370 00:24:02.370 --- 10.0.0.2 ping statistics --- 00:24:02.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.370 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:24:02.370 00:24:02.370 --- 10.0.0.1 ping statistics --- 00:24:02.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.370 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3872122 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3872122 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 3872122 ']' 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:02.370 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.370 [2024-11-06 15:36:19.688882] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:24:02.370 [2024-11-06 15:36:19.688949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.371 [2024-11-06 15:36:19.790419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:02.371 [2024-11-06 15:36:19.844183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.371 [2024-11-06 15:36:19.844232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.371 [2024-11-06 15:36:19.844241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.371 [2024-11-06 15:36:19.844248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.371 [2024-11-06 15:36:19.844255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.371 [2024-11-06 15:36:19.846663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.371 [2024-11-06 15:36:19.846824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.371 [2024-11-06 15:36:19.846884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.371 [2024-11-06 15:36:19.846885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.631 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:02.631 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:24:02.631 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:02.631 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.631 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.631 [2024-11-06 15:36:20.526000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.631 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.631 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:02.631 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:02.631 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.631 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:02.631 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.631 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.902 Malloc0 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.902 [2024-11-06 15:36:20.648494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.902 [ 00:24:02.902 { 00:24:02.902 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:02.902 "subtype": "Discovery", 00:24:02.902 "listen_addresses": [ 00:24:02.902 { 00:24:02.902 "trtype": "TCP", 00:24:02.902 "adrfam": "IPv4", 00:24:02.902 "traddr": "10.0.0.2", 00:24:02.902 "trsvcid": "4420" 00:24:02.902 } 00:24:02.902 ], 00:24:02.902 "allow_any_host": true, 00:24:02.902 "hosts": [] 00:24:02.902 }, 00:24:02.902 { 00:24:02.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.902 "subtype": "NVMe", 00:24:02.902 "listen_addresses": [ 00:24:02.902 { 00:24:02.902 "trtype": "TCP", 00:24:02.902 "adrfam": "IPv4", 00:24:02.902 "traddr": "10.0.0.2", 00:24:02.902 "trsvcid": "4420" 00:24:02.902 } 00:24:02.902 ], 00:24:02.902 "allow_any_host": true, 00:24:02.902 "hosts": [], 00:24:02.902 "serial_number": "SPDK00000000000001", 00:24:02.902 "model_number": "SPDK bdev Controller", 00:24:02.902 "max_namespaces": 32, 00:24:02.902 "min_cntlid": 1, 00:24:02.902 "max_cntlid": 65519, 00:24:02.902 "namespaces": [ 00:24:02.902 { 00:24:02.902 "nsid": 1, 00:24:02.902 "bdev_name": "Malloc0", 00:24:02.902 "name": "Malloc0", 00:24:02.902 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:02.902 "eui64": "ABCDEF0123456789", 00:24:02.902 "uuid": "2279d5a8-fa91-4301-87ae-6cc3b613ea3f" 00:24:02.902 } 00:24:02.902 ] 00:24:02.902 } 00:24:02.902 ] 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.902 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:02.902 [2024-11-06 15:36:20.713546] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:24:02.902 [2024-11-06 15:36:20.713591] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3872471 ] 00:24:02.902 [2024-11-06 15:36:20.769505] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:02.902 [2024-11-06 15:36:20.769580] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:02.902 [2024-11-06 15:36:20.769586] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:02.903 [2024-11-06 15:36:20.769606] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:02.903 [2024-11-06 15:36:20.769619] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:02.903 [2024-11-06 15:36:20.773180] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:02.903 [2024-11-06 15:36:20.773228] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcfa550 0 00:24:02.903 [2024-11-06 15:36:20.780764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:02.903 [2024-11-06 15:36:20.780780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:02.903 [2024-11-06 15:36:20.780785] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:02.903 [2024-11-06 15:36:20.780789] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:02.903 [2024-11-06 15:36:20.780831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.780839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.780843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcfa550) 00:24:02.903 [2024-11-06 15:36:20.780861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:02.903 [2024-11-06 15:36:20.780890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c100, cid 0, qid 0 00:24:02.903 [2024-11-06 15:36:20.787763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.903 [2024-11-06 15:36:20.787774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.903 [2024-11-06 15:36:20.787778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.787784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c100) on tqpair=0xcfa550 00:24:02.903 [2024-11-06 15:36:20.787799] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:02.903 [2024-11-06 15:36:20.787808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:02.903 [2024-11-06 15:36:20.787813] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:02.903 [2024-11-06 15:36:20.787830] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.787835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.787839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcfa550) 00:24:02.903 [2024-11-06 15:36:20.787848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.903 [2024-11-06 15:36:20.787866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c100, cid 0, qid 0 00:24:02.903 [2024-11-06 15:36:20.788077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.903 [2024-11-06 15:36:20.788084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.903 [2024-11-06 15:36:20.788087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.788091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c100) on tqpair=0xcfa550 00:24:02.903 [2024-11-06 15:36:20.788098] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:02.903 [2024-11-06 15:36:20.788105] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:02.903 [2024-11-06 15:36:20.788112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.788116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.788120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcfa550) 00:24:02.903 [2024-11-06 15:36:20.788127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.903 [2024-11-06 15:36:20.788138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c100, cid 0, qid 0 00:24:02.903 [2024-11-06 15:36:20.788322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.903 [2024-11-06 15:36:20.788328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.903 [2024-11-06 15:36:20.788332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.788336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c100) on tqpair=0xcfa550 00:24:02.903 [2024-11-06 15:36:20.788342] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:02.903 [2024-11-06 15:36:20.788350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:02.903 [2024-11-06 15:36:20.788357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.788361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.788365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcfa550) 00:24:02.903 [2024-11-06 15:36:20.788371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.903 [2024-11-06 15:36:20.788386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c100, cid 0, qid 0 
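The FABRIC PROPERTY GET/SET exchanges traced above are spdk_nvme_identify bringing up its admin queue against the discovery subsystem at 10.0.0.2:4420. Not part of this run, but the same listener could be cross-checked from the initiator side with the kernel nvme-tcp module loaded earlier:

nvme discover -t tcp -a 10.0.0.2 -s 4420    # kernel-initiator view of the discovery log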
00:24:02.903 [2024-11-06 15:36:20.788564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.903 [2024-11-06 15:36:20.788571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.903 [2024-11-06 15:36:20.788574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.788578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c100) on tqpair=0xcfa550 00:24:02.903 [2024-11-06 15:36:20.788584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:02.903 [2024-11-06 15:36:20.788593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.788597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.903 [2024-11-06 15:36:20.788601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcfa550) 00:24:02.904 [2024-11-06 15:36:20.788608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.904 [2024-11-06 15:36:20.788618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c100, cid 0, qid 0 00:24:02.904 [2024-11-06 15:36:20.788836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.904 [2024-11-06 15:36:20.788843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.904 [2024-11-06 15:36:20.788846] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.788850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c100) on tqpair=0xcfa550 00:24:02.904 [2024-11-06 15:36:20.788855] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:02.904 [2024-11-06 15:36:20.788860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:02.904 [2024-11-06 15:36:20.788868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:02.904 [2024-11-06 15:36:20.788978] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:02.904 [2024-11-06 15:36:20.788983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:02.904 [2024-11-06 15:36:20.788993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.788997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.789001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcfa550) 00:24:02.904 [2024-11-06 15:36:20.789007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.904 [2024-11-06 15:36:20.789018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c100, cid 0, qid 0 00:24:02.904 [2024-11-06 15:36:20.789243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.904 [2024-11-06 15:36:20.789250] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.904 [2024-11-06 15:36:20.789253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.789257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c100) on tqpair=0xcfa550 00:24:02.904 [2024-11-06 15:36:20.789262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:02.904 [2024-11-06 15:36:20.789271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.789275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.789278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcfa550) 00:24:02.904 [2024-11-06 15:36:20.789288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.904 [2024-11-06 15:36:20.789299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c100, cid 0, qid 0 00:24:02.904 [2024-11-06 15:36:20.789514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.904 [2024-11-06 15:36:20.789521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.904 [2024-11-06 15:36:20.789524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.789528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c100) on tqpair=0xcfa550 00:24:02.904 [2024-11-06 15:36:20.789533] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:02.904 [2024-11-06 15:36:20.789538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:02.904 [2024-11-06 15:36:20.789546] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:02.904 [2024-11-06 15:36:20.789557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:02.904 [2024-11-06 15:36:20.789567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.789571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcfa550) 00:24:02.904 [2024-11-06 15:36:20.789578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.904 [2024-11-06 15:36:20.789589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c100, cid 0, qid 0 00:24:02.904 [2024-11-06 15:36:20.789813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.904 [2024-11-06 15:36:20.789821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.904 [2024-11-06 15:36:20.789825] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.789830] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcfa550): datao=0, datal=4096, cccid=0 00:24:02.904 [2024-11-06 15:36:20.789834] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xd5c100) on tqpair(0xcfa550): expected_datao=0, payload_size=4096 00:24:02.904 [2024-11-06 15:36:20.789839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.789855] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.789860] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.833755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.904 [2024-11-06 15:36:20.833768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.904 [2024-11-06 15:36:20.833771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.833776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c100) on tqpair=0xcfa550 00:24:02.904 [2024-11-06 15:36:20.833786] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:02.904 [2024-11-06 15:36:20.833792] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:02.904 [2024-11-06 15:36:20.833797] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:02.904 [2024-11-06 15:36:20.833808] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:02.904 [2024-11-06 15:36:20.833813] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:02.904 [2024-11-06 15:36:20.833819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:02.904 [2024-11-06 15:36:20.833834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:02.904 [2024-11-06 15:36:20.833842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.833846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.904 [2024-11-06 15:36:20.833850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcfa550) 00:24:02.904 [2024-11-06 15:36:20.833859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:02.904 [2024-11-06 15:36:20.833873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c100, cid 0, qid 0 00:24:02.904 [2024-11-06 15:36:20.834058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.904 [2024-11-06 15:36:20.834065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.905 [2024-11-06 15:36:20.834068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c100) on tqpair=0xcfa550 00:24:02.905 [2024-11-06 15:36:20.834081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcfa550) 00:24:02.905 [2024-11-06 
15:36:20.834095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.905 [2024-11-06 15:36:20.834102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcfa550) 00:24:02.905 [2024-11-06 15:36:20.834115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.905 [2024-11-06 15:36:20.834121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcfa550) 00:24:02.905 [2024-11-06 15:36:20.834134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.905 [2024-11-06 15:36:20.834140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcfa550) 00:24:02.905 [2024-11-06 15:36:20.834153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.905 [2024-11-06 15:36:20.834158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:02.905 [2024-11-06 15:36:20.834167] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:02.905 [2024-11-06 15:36:20.834174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcfa550) 00:24:02.905 [2024-11-06 15:36:20.834184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.905 [2024-11-06 15:36:20.834196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c100, cid 0, qid 0 00:24:02.905 [2024-11-06 15:36:20.834202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c280, cid 1, qid 0 00:24:02.905 [2024-11-06 15:36:20.834209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c400, cid 2, qid 0 00:24:02.905 [2024-11-06 15:36:20.834214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c580, cid 3, qid 0 00:24:02.905 [2024-11-06 15:36:20.834219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c700, cid 4, qid 0 00:24:02.905 [2024-11-06 15:36:20.834467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.905 [2024-11-06 15:36:20.834473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.905 [2024-11-06 15:36:20.834477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.905 
[2024-11-06 15:36:20.834481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c700) on tqpair=0xcfa550 00:24:02.905 [2024-11-06 15:36:20.834489] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:02.905 [2024-11-06 15:36:20.834495] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:02.905 [2024-11-06 15:36:20.834506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcfa550) 00:24:02.905 [2024-11-06 15:36:20.834517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.905 [2024-11-06 15:36:20.834527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c700, cid 4, qid 0 00:24:02.905 [2024-11-06 15:36:20.834740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.905 [2024-11-06 15:36:20.834753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.905 [2024-11-06 15:36:20.834757] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834761] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcfa550): datao=0, datal=4096, cccid=4 00:24:02.905 [2024-11-06 15:36:20.834766] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd5c700) on tqpair(0xcfa550): expected_datao=0, payload_size=4096 00:24:02.905 [2024-11-06 15:36:20.834770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834777] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834781] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.905 [2024-11-06 15:36:20.834964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.905 [2024-11-06 15:36:20.834967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.834971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c700) on tqpair=0xcfa550 00:24:02.905 [2024-11-06 15:36:20.834986] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:02.905 [2024-11-06 15:36:20.835014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.835018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcfa550) 00:24:02.905 [2024-11-06 15:36:20.835025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.905 [2024-11-06 15:36:20.835032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.835036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.835039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcfa550) 00:24:02.905 [2024-11-06 15:36:20.835046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.905 [2024-11-06 15:36:20.835060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c700, cid 4, qid 0 00:24:02.905 [2024-11-06 15:36:20.835068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c880, cid 5, qid 0 00:24:02.905 [2024-11-06 15:36:20.835302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.905 [2024-11-06 15:36:20.835309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.905 [2024-11-06 15:36:20.835312] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.905 [2024-11-06 15:36:20.835316] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcfa550): datao=0, datal=1024, cccid=4 00:24:02.906 [2024-11-06 15:36:20.835320] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd5c700) on tqpair(0xcfa550): expected_datao=0, payload_size=1024 00:24:02.906 [2024-11-06 15:36:20.835325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.906 [2024-11-06 15:36:20.835331] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.906 [2024-11-06 15:36:20.835335] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.906 [2024-11-06 15:36:20.835341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.906 [2024-11-06 15:36:20.835347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.906 [2024-11-06 15:36:20.835350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.906 [2024-11-06 15:36:20.835354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c880) on tqpair=0xcfa550 00:24:02.906 [2024-11-06 15:36:20.878756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.906 [2024-11-06 15:36:20.878769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.906 [2024-11-06 15:36:20.878773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.906 [2024-11-06 15:36:20.878777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c700) on tqpair=0xcfa550 00:24:02.906 [2024-11-06 15:36:20.878791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.906 [2024-11-06 15:36:20.878796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcfa550) 00:24:02.906 [2024-11-06 15:36:20.878804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.906 [2024-11-06 15:36:20.878822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c700, cid 4, qid 0 00:24:02.906 [2024-11-06 15:36:20.879069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.906 [2024-11-06 15:36:20.879075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.906 [2024-11-06 15:36:20.879079] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.906 [2024-11-06 15:36:20.879083] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcfa550): datao=0, datal=3072, cccid=4 00:24:02.906 [2024-11-06 15:36:20.879087] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd5c700) on tqpair(0xcfa550): expected_datao=0, payload_size=3072 00:24:02.906 [2024-11-06 15:36:20.879092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
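The GET LOG PAGE (02) admin commands above, with log identifier 0x70 in the low byte of cdw10, show the host pulling the discovery log page from the discovery controller in chunks (4096-, 1024- and 3072-byte c2h transfers here), finishing with a short 8-byte read that appears to re-fetch just the generation counter to confirm the log did not change mid-transfer. A minimal sketch of issuing the same read through SPDK's public host API, assuming an already connected discovery controller polled from the caller's thread (the helper name read_discovery_header and the completion-flag pattern are illustrative, not part of this test):

    #include <stdbool.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    /* Completion callback: flag that the admin command finished. */
    static void
    get_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            (void)cpl;
            *(bool *)ctx = true;
    }

    /* Read the 1024-byte discovery log header (genctr, numrec, recfmt).
     * SPDK_NVME_LOG_DISCOVERY is log identifier 0x70, matching the
     * cdw10 values printed in the trace; with the TCP transport an
     * ordinary heap buffer should be acceptable as the payload. */
    static int
    read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
                          struct spdk_nvmf_discovery_log_page *hdr)
    {
            bool done = false;
            int rc;

            rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                                  0 /* nsid */, hdr, sizeof(*hdr),
                                                  0 /* offset */, get_log_done, &done);
            if (rc != 0) {
                    return rc;
            }
            /* Drive the admin queue until the completion fires. */
            while (!done) {
                    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            return 0;
    }

Once numrec is known, a second call with a larger payload (and, past the MDTS-limited chunk size, a non-zero offset) retrieves the entries themselves, which is the chunked read pattern visible in the log above and the report that follows.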
00:24:02.906 [2024-11-06 15:36:20.879108] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.906 [2024-11-06 15:36:20.879112] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.170 [2024-11-06 15:36:20.920760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.170 [2024-11-06 15:36:20.920776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.170 [2024-11-06 15:36:20.920780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.170 [2024-11-06 15:36:20.920785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c700) on tqpair=0xcfa550 00:24:03.170 [2024-11-06 15:36:20.920798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.170 [2024-11-06 15:36:20.920802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcfa550) 00:24:03.170 [2024-11-06 15:36:20.920810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.170 [2024-11-06 15:36:20.920827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c700, cid 4, qid 0 00:24:03.170 [2024-11-06 15:36:20.921035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.170 [2024-11-06 15:36:20.921041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.170 [2024-11-06 15:36:20.921045] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.170 [2024-11-06 15:36:20.921049] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcfa550): datao=0, datal=8, cccid=4 00:24:03.170 [2024-11-06 15:36:20.921053] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd5c700) on tqpair(0xcfa550): expected_datao=0, payload_size=8 00:24:03.170 [2024-11-06 15:36:20.921057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.170 [2024-11-06 15:36:20.921064] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.170 [2024-11-06 15:36:20.921068] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.170 [2024-11-06 15:36:20.961950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.170 [2024-11-06 15:36:20.961960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.170 [2024-11-06 15:36:20.961964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.170 [2024-11-06 15:36:20.961968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c700) on tqpair=0xcfa550 00:24:03.170 ===================================================== 00:24:03.170 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:03.170 ===================================================== 00:24:03.170 Controller Capabilities/Features 00:24:03.170 ================================ 00:24:03.170 Vendor ID: 0000 00:24:03.170 Subsystem Vendor ID: 0000 00:24:03.170 Serial Number: .................... 00:24:03.170 Model Number: ........................................ 
00:24:03.170 Firmware Version: 25.01 00:24:03.170 Recommended Arb Burst: 0 00:24:03.170 IEEE OUI Identifier: 00 00 00 00:24:03.170 Multi-path I/O 00:24:03.170 May have multiple subsystem ports: No 00:24:03.170 May have multiple controllers: No 00:24:03.170 Associated with SR-IOV VF: No 00:24:03.170 Max Data Transfer Size: 131072 00:24:03.170 Max Number of Namespaces: 0 00:24:03.170 Max Number of I/O Queues: 1024 00:24:03.170 NVMe Specification Version (VS): 1.3 00:24:03.170 NVMe Specification Version (Identify): 1.3 00:24:03.170 Maximum Queue Entries: 128 00:24:03.170 Contiguous Queues Required: Yes 00:24:03.170 Arbitration Mechanisms Supported 00:24:03.170 Weighted Round Robin: Not Supported 00:24:03.170 Vendor Specific: Not Supported 00:24:03.170 Reset Timeout: 15000 ms 00:24:03.170 Doorbell Stride: 4 bytes 00:24:03.170 NVM Subsystem Reset: Not Supported 00:24:03.170 Command Sets Supported 00:24:03.170 NVM Command Set: Supported 00:24:03.170 Boot Partition: Not Supported 00:24:03.170 Memory Page Size Minimum: 4096 bytes 00:24:03.170 Memory Page Size Maximum: 4096 bytes 00:24:03.170 Persistent Memory Region: Not Supported 00:24:03.170 Optional Asynchronous Events Supported 00:24:03.170 Namespace Attribute Notices: Not Supported 00:24:03.171 Firmware Activation Notices: Not Supported 00:24:03.171 ANA Change Notices: Not Supported 00:24:03.171 PLE Aggregate Log Change Notices: Not Supported 00:24:03.171 LBA Status Info Alert Notices: Not Supported 00:24:03.171 EGE Aggregate Log Change Notices: Not Supported 00:24:03.171 Normal NVM Subsystem Shutdown event: Not Supported 00:24:03.171 Zone Descriptor Change Notices: Not Supported 00:24:03.171 Discovery Log Change Notices: Supported 00:24:03.171 Controller Attributes 00:24:03.171 128-bit Host Identifier: Not Supported 00:24:03.171 Non-Operational Permissive Mode: Not Supported 00:24:03.171 NVM Sets: Not Supported 00:24:03.171 Read Recovery Levels: Not Supported 00:24:03.171 Endurance Groups: Not Supported 00:24:03.171 Predictable Latency Mode: Not Supported 00:24:03.171 Traffic Based Keep ALive: Not Supported 00:24:03.171 Namespace Granularity: Not Supported 00:24:03.171 SQ Associations: Not Supported 00:24:03.171 UUID List: Not Supported 00:24:03.171 Multi-Domain Subsystem: Not Supported 00:24:03.171 Fixed Capacity Management: Not Supported 00:24:03.171 Variable Capacity Management: Not Supported 00:24:03.171 Delete Endurance Group: Not Supported 00:24:03.171 Delete NVM Set: Not Supported 00:24:03.171 Extended LBA Formats Supported: Not Supported 00:24:03.171 Flexible Data Placement Supported: Not Supported 00:24:03.171 00:24:03.171 Controller Memory Buffer Support 00:24:03.171 ================================ 00:24:03.171 Supported: No 00:24:03.171 00:24:03.171 Persistent Memory Region Support 00:24:03.171 ================================ 00:24:03.171 Supported: No 00:24:03.171 00:24:03.171 Admin Command Set Attributes 00:24:03.171 ============================ 00:24:03.171 Security Send/Receive: Not Supported 00:24:03.171 Format NVM: Not Supported 00:24:03.171 Firmware Activate/Download: Not Supported 00:24:03.171 Namespace Management: Not Supported 00:24:03.171 Device Self-Test: Not Supported 00:24:03.171 Directives: Not Supported 00:24:03.171 NVMe-MI: Not Supported 00:24:03.171 Virtualization Management: Not Supported 00:24:03.171 Doorbell Buffer Config: Not Supported 00:24:03.171 Get LBA Status Capability: Not Supported 00:24:03.171 Command & Feature Lockdown Capability: Not Supported 00:24:03.171 Abort Command Limit: 1 00:24:03.171 Async 
Event Request Limit: 4 00:24:03.171 Number of Firmware Slots: N/A 00:24:03.171 Firmware Slot 1 Read-Only: N/A 00:24:03.171 Firmware Activation Without Reset: N/A 00:24:03.171 Multiple Update Detection Support: N/A 00:24:03.171 Firmware Update Granularity: No Information Provided 00:24:03.171 Per-Namespace SMART Log: No 00:24:03.171 Asymmetric Namespace Access Log Page: Not Supported 00:24:03.171 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:03.171 Command Effects Log Page: Not Supported 00:24:03.171 Get Log Page Extended Data: Supported 00:24:03.171 Telemetry Log Pages: Not Supported 00:24:03.171 Persistent Event Log Pages: Not Supported 00:24:03.171 Supported Log Pages Log Page: May Support 00:24:03.171 Commands Supported & Effects Log Page: Not Supported 00:24:03.171 Feature Identifiers & Effects Log Page:May Support 00:24:03.171 NVMe-MI Commands & Effects Log Page: May Support 00:24:03.171 Data Area 4 for Telemetry Log: Not Supported 00:24:03.171 Error Log Page Entries Supported: 128 00:24:03.171 Keep Alive: Not Supported 00:24:03.171 00:24:03.171 NVM Command Set Attributes 00:24:03.171 ========================== 00:24:03.171 Submission Queue Entry Size 00:24:03.171 Max: 1 00:24:03.171 Min: 1 00:24:03.171 Completion Queue Entry Size 00:24:03.171 Max: 1 00:24:03.171 Min: 1 00:24:03.171 Number of Namespaces: 0 00:24:03.171 Compare Command: Not Supported 00:24:03.171 Write Uncorrectable Command: Not Supported 00:24:03.171 Dataset Management Command: Not Supported 00:24:03.171 Write Zeroes Command: Not Supported 00:24:03.171 Set Features Save Field: Not Supported 00:24:03.171 Reservations: Not Supported 00:24:03.171 Timestamp: Not Supported 00:24:03.171 Copy: Not Supported 00:24:03.171 Volatile Write Cache: Not Present 00:24:03.171 Atomic Write Unit (Normal): 1 00:24:03.171 Atomic Write Unit (PFail): 1 00:24:03.171 Atomic Compare & Write Unit: 1 00:24:03.171 Fused Compare & Write: Supported 00:24:03.171 Scatter-Gather List 00:24:03.171 SGL Command Set: Supported 00:24:03.171 SGL Keyed: Supported 00:24:03.171 SGL Bit Bucket Descriptor: Not Supported 00:24:03.171 SGL Metadata Pointer: Not Supported 00:24:03.171 Oversized SGL: Not Supported 00:24:03.171 SGL Metadata Address: Not Supported 00:24:03.171 SGL Offset: Supported 00:24:03.171 Transport SGL Data Block: Not Supported 00:24:03.171 Replay Protected Memory Block: Not Supported 00:24:03.171 00:24:03.171 Firmware Slot Information 00:24:03.171 ========================= 00:24:03.171 Active slot: 0 00:24:03.171 00:24:03.171 00:24:03.171 Error Log 00:24:03.171 ========= 00:24:03.171 00:24:03.171 Active Namespaces 00:24:03.171 ================= 00:24:03.171 Discovery Log Page 00:24:03.171 ================== 00:24:03.171 Generation Counter: 2 00:24:03.171 Number of Records: 2 00:24:03.171 Record Format: 0 00:24:03.171 00:24:03.171 Discovery Log Entry 0 00:24:03.171 ---------------------- 00:24:03.171 Transport Type: 3 (TCP) 00:24:03.171 Address Family: 1 (IPv4) 00:24:03.171 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:03.171 Entry Flags: 00:24:03.171 Duplicate Returned Information: 1 00:24:03.171 Explicit Persistent Connection Support for Discovery: 1 00:24:03.171 Transport Requirements: 00:24:03.171 Secure Channel: Not Required 00:24:03.171 Port ID: 0 (0x0000) 00:24:03.171 Controller ID: 65535 (0xffff) 00:24:03.171 Admin Max SQ Size: 128 00:24:03.171 Transport Service Identifier: 4420 00:24:03.171 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:03.171 Transport Address: 10.0.0.2 00:24:03.171 
Discovery Log Entry 1 00:24:03.171 ---------------------- 00:24:03.171 Transport Type: 3 (TCP) 00:24:03.171 Address Family: 1 (IPv4) 00:24:03.171 Subsystem Type: 2 (NVM Subsystem) 00:24:03.171 Entry Flags: 00:24:03.171 Duplicate Returned Information: 0 00:24:03.171 Explicit Persistent Connection Support for Discovery: 0 00:24:03.171 Transport Requirements: 00:24:03.171 Secure Channel: Not Required 00:24:03.171 Port ID: 0 (0x0000) 00:24:03.171 Controller ID: 65535 (0xffff) 00:24:03.171 Admin Max SQ Size: 128 00:24:03.171 Transport Service Identifier: 4420 00:24:03.171 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:03.171 Transport Address: 10.0.0.2 [2024-11-06 15:36:20.962078] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:03.171 [2024-11-06 15:36:20.962090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c100) on tqpair=0xcfa550 00:24:03.171 [2024-11-06 15:36:20.962097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.171 [2024-11-06 15:36:20.962103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c280) on tqpair=0xcfa550 00:24:03.171 [2024-11-06 15:36:20.962108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.171 [2024-11-06 15:36:20.962113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c400) on tqpair=0xcfa550 00:24:03.171 [2024-11-06 15:36:20.962118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.171 [2024-11-06 15:36:20.962123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c580) on tqpair=0xcfa550 00:24:03.171 [2024-11-06 15:36:20.962128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.171 [2024-11-06 15:36:20.962141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.171 [2024-11-06 15:36:20.962145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.171 [2024-11-06 15:36:20.962149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcfa550) 00:24:03.171 [2024-11-06 15:36:20.962158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.171 [2024-11-06 15:36:20.962174] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c580, cid 3, qid 0 00:24:03.171 [2024-11-06 15:36:20.962442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.171 [2024-11-06 15:36:20.962450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.171 [2024-11-06 15:36:20.962453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.171 [2024-11-06 15:36:20.962457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c580) on tqpair=0xcfa550 00:24:03.171 [2024-11-06 15:36:20.962465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.171 [2024-11-06 15:36:20.962469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.171 [2024-11-06 15:36:20.962472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcfa550) 00:24:03.171 [2024-11-06 15:36:20.962479] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.171 [2024-11-06 15:36:20.962492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c580, cid 3, qid 0 00:24:03.171 [2024-11-06 15:36:20.962743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.172 [2024-11-06 15:36:20.962757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.172 [2024-11-06 15:36:20.962761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.962764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c580) on tqpair=0xcfa550 00:24:03.172 [2024-11-06 15:36:20.962770] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:03.172 [2024-11-06 15:36:20.962775] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:03.172 [2024-11-06 15:36:20.962784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.962788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.962792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcfa550) 00:24:03.172 [2024-11-06 15:36:20.962799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.172 [2024-11-06 15:36:20.962809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c580, cid 3, qid 0 00:24:03.172 [2024-11-06 15:36:20.963047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.172 [2024-11-06 15:36:20.963053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.172 [2024-11-06 15:36:20.963057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.963060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c580) on tqpair=0xcfa550 00:24:03.172 [2024-11-06 15:36:20.963072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.963076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.963079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcfa550) 00:24:03.172 [2024-11-06 15:36:20.963086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.172 [2024-11-06 15:36:20.963096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c580, cid 3, qid 0 00:24:03.172 [2024-11-06 15:36:20.963296] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.172 [2024-11-06 15:36:20.963303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.172 [2024-11-06 15:36:20.963306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.963310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c580) on tqpair=0xcfa550 00:24:03.172 [2024-11-06 15:36:20.963320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.963324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.963327] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcfa550) 00:24:03.172 [2024-11-06 15:36:20.963334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.172 [2024-11-06 15:36:20.963344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c580, cid 3, qid 0 00:24:03.172 [2024-11-06 15:36:20.963550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.172 [2024-11-06 15:36:20.963556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.172 [2024-11-06 15:36:20.963560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.963564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c580) on tqpair=0xcfa550 00:24:03.172 [2024-11-06 15:36:20.963573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.963577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.963581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcfa550) 00:24:03.172 [2024-11-06 15:36:20.963588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.172 [2024-11-06 15:36:20.963600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c580, cid 3, qid 0 00:24:03.172 [2024-11-06 15:36:20.963802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.172 [2024-11-06 15:36:20.963809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.172 [2024-11-06 15:36:20.963813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.963816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c580) on tqpair=0xcfa550 00:24:03.172 [2024-11-06 15:36:20.963826] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.963830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.963834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcfa550) 00:24:03.172 [2024-11-06 15:36:20.963841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.172 [2024-11-06 15:36:20.963851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c580, cid 3, qid 0 00:24:03.172 [2024-11-06 15:36:20.964054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.172 [2024-11-06 15:36:20.964060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.172 [2024-11-06 15:36:20.964063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.964067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c580) on tqpair=0xcfa550 00:24:03.172 [2024-11-06 15:36:20.964077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.964081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.964084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcfa550) 00:24:03.172 [2024-11-06 15:36:20.964091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.172 [2024-11-06 15:36:20.964101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c580, cid 3, qid 0 00:24:03.172 [2024-11-06 15:36:20.964314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.172 [2024-11-06 15:36:20.964320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.172 [2024-11-06 15:36:20.964324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.964328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c580) on tqpair=0xcfa550 00:24:03.172 [2024-11-06 15:36:20.964338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.964342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.964346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcfa550) 00:24:03.172 [2024-11-06 15:36:20.964352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.172 [2024-11-06 15:36:20.964363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c580, cid 3, qid 0 00:24:03.172 [2024-11-06 15:36:20.964558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.172 [2024-11-06 15:36:20.964564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.172 [2024-11-06 15:36:20.964568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.964571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c580) on tqpair=0xcfa550 00:24:03.172 [2024-11-06 15:36:20.964581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.964585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.964589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcfa550) 00:24:03.172 [2024-11-06 15:36:20.964596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.172 [2024-11-06 15:36:20.964606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c580, cid 3, qid 0 00:24:03.172 [2024-11-06 15:36:20.968757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.172 [2024-11-06 15:36:20.968765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.172 [2024-11-06 15:36:20.968769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.968773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c580) on tqpair=0xcfa550 00:24:03.172 [2024-11-06 15:36:20.968783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.968787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.968791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcfa550) 00:24:03.172 [2024-11-06 15:36:20.968798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.172 [2024-11-06 15:36:20.968809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd5c580, cid 3, qid 0 00:24:03.172 [2024-11-06 15:36:20.969039] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.172 [2024-11-06 15:36:20.969046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.172 [2024-11-06 15:36:20.969049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:20.969053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd5c580) on tqpair=0xcfa550 00:24:03.172 [2024-11-06 15:36:20.969061] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:24:03.172 00:24:03.172 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:03.172 [2024-11-06 15:36:21.019696] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:24:03.172 [2024-11-06 15:36:21.019777] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3872477 ] 00:24:03.172 [2024-11-06 15:36:21.088273] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:03.172 [2024-11-06 15:36:21.088344] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:03.172 [2024-11-06 15:36:21.088350] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:03.172 [2024-11-06 15:36:21.088366] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:03.172 [2024-11-06 15:36:21.088379] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:03.172 [2024-11-06 15:36:21.089038] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:03.172 [2024-11-06 15:36:21.089076] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc11550 0 00:24:03.172 [2024-11-06 15:36:21.102766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:03.172 [2024-11-06 15:36:21.102781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:03.172 [2024-11-06 15:36:21.102786] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:03.172 [2024-11-06 15:36:21.102790] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:03.172 [2024-11-06 15:36:21.102824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.172 [2024-11-06 15:36:21.102830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.102834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11550) 00:24:03.173 [2024-11-06 15:36:21.102853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:03.173 [2024-11-06 15:36:21.102876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73100, cid 0, qid 0 00:24:03.173 [2024-11-06 15:36:21.110763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.173 [2024-11-06 15:36:21.110773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:24:03.173 [2024-11-06 15:36:21.110777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.110781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73100) on tqpair=0xc11550 00:24:03.173 [2024-11-06 15:36:21.110794] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:03.173 [2024-11-06 15:36:21.110802] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:03.173 [2024-11-06 15:36:21.110807] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:03.173 [2024-11-06 15:36:21.110822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.110826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.110830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11550) 00:24:03.173 [2024-11-06 15:36:21.110839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.173 [2024-11-06 15:36:21.110856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73100, cid 0, qid 0 00:24:03.173 [2024-11-06 15:36:21.111074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.173 [2024-11-06 15:36:21.111080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.173 [2024-11-06 15:36:21.111084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.111088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73100) on tqpair=0xc11550 00:24:03.173 [2024-11-06 15:36:21.111093] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:03.173 [2024-11-06 15:36:21.111101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:03.173 [2024-11-06 15:36:21.111108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.111112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.111116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11550) 00:24:03.173 [2024-11-06 15:36:21.111123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.173 [2024-11-06 15:36:21.111133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73100, cid 0, qid 0 00:24:03.173 [2024-11-06 15:36:21.111349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.173 [2024-11-06 15:36:21.111355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.173 [2024-11-06 15:36:21.111359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.111362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73100) on tqpair=0xc11550 00:24:03.173 [2024-11-06 15:36:21.111368] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:03.173 [2024-11-06 15:36:21.111377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:03.173 [2024-11-06 15:36:21.111384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.111388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.111392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11550) 00:24:03.173 [2024-11-06 15:36:21.111399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.173 [2024-11-06 15:36:21.111412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73100, cid 0, qid 0 00:24:03.173 [2024-11-06 15:36:21.111614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.173 [2024-11-06 15:36:21.111621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.173 [2024-11-06 15:36:21.111624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.111628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73100) on tqpair=0xc11550 00:24:03.173 [2024-11-06 15:36:21.111633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:03.173 [2024-11-06 15:36:21.111643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.111647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.111651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11550) 00:24:03.173 [2024-11-06 15:36:21.111658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.173 [2024-11-06 15:36:21.111668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73100, cid 0, qid 0 00:24:03.173 [2024-11-06 15:36:21.111859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.173 [2024-11-06 15:36:21.111865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.173 [2024-11-06 15:36:21.111869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.111873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73100) on tqpair=0xc11550 00:24:03.173 [2024-11-06 15:36:21.111877] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:03.173 [2024-11-06 15:36:21.111883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:03.173 [2024-11-06 15:36:21.111891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:03.173 [2024-11-06 15:36:21.111999] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:03.173 [2024-11-06 15:36:21.112004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:03.173 [2024-11-06 15:36:21.112012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.112016] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.112020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11550) 00:24:03.173 [2024-11-06 15:36:21.112026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.173 [2024-11-06 15:36:21.112037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73100, cid 0, qid 0 00:24:03.173 [2024-11-06 15:36:21.115754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.173 [2024-11-06 15:36:21.115762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.173 [2024-11-06 15:36:21.115765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.115769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73100) on tqpair=0xc11550 00:24:03.173 [2024-11-06 15:36:21.115774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:03.173 [2024-11-06 15:36:21.115784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.115788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.115805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11550) 00:24:03.173 [2024-11-06 15:36:21.115816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.173 [2024-11-06 15:36:21.115829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73100, cid 0, qid 0 00:24:03.173 [2024-11-06 15:36:21.116035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.173 [2024-11-06 15:36:21.116041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.173 [2024-11-06 15:36:21.116044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.116048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73100) on tqpair=0xc11550 00:24:03.173 [2024-11-06 15:36:21.116053] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:03.173 [2024-11-06 15:36:21.116058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:03.173 [2024-11-06 15:36:21.116067] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:03.173 [2024-11-06 15:36:21.116083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:03.173 [2024-11-06 15:36:21.116093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.116096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11550) 00:24:03.173 [2024-11-06 15:36:21.116104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.173 [2024-11-06 15:36:21.116114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xc73100, cid 0, qid 0 00:24:03.173 [2024-11-06 15:36:21.116374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.173 [2024-11-06 15:36:21.116381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.173 [2024-11-06 15:36:21.116385] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.116389] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc11550): datao=0, datal=4096, cccid=0 00:24:03.173 [2024-11-06 15:36:21.116394] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc73100) on tqpair(0xc11550): expected_datao=0, payload_size=4096 00:24:03.173 [2024-11-06 15:36:21.116399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.116412] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.173 [2024-11-06 15:36:21.116416] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.437 [2024-11-06 15:36:21.157757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.437 [2024-11-06 15:36:21.157769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.437 [2024-11-06 15:36:21.157773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.437 [2024-11-06 15:36:21.157777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73100) on tqpair=0xc11550 00:24:03.437 [2024-11-06 15:36:21.157786] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:03.437 [2024-11-06 15:36:21.157792] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:03.437 [2024-11-06 15:36:21.157796] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:03.437 [2024-11-06 15:36:21.157804] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:03.437 [2024-11-06 15:36:21.157809] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:03.437 [2024-11-06 15:36:21.157814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:03.437 [2024-11-06 15:36:21.157825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:03.437 [2024-11-06 15:36:21.157837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.437 [2024-11-06 15:36:21.157842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.437 [2024-11-06 15:36:21.157845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11550) 00:24:03.437 [2024-11-06 15:36:21.157853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.437 [2024-11-06 15:36:21.157866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73100, cid 0, qid 0 00:24:03.437 [2024-11-06 15:36:21.158072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.437 [2024-11-06 15:36:21.158079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.437 [2024-11-06 15:36:21.158082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:03.437 [2024-11-06 15:36:21.158086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73100) on tqpair=0xc11550 00:24:03.437 [2024-11-06 15:36:21.158093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.437 [2024-11-06 15:36:21.158097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.437 [2024-11-06 15:36:21.158101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11550) 00:24:03.437 [2024-11-06 15:36:21.158107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.437 [2024-11-06 15:36:21.158113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.437 [2024-11-06 15:36:21.158117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.437 [2024-11-06 15:36:21.158121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc11550) 00:24:03.437 [2024-11-06 15:36:21.158126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.437 [2024-11-06 15:36:21.158133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.158136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.158140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc11550) 00:24:03.438 [2024-11-06 15:36:21.158146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.438 [2024-11-06 15:36:21.158152] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.158156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.158159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.438 [2024-11-06 15:36:21.158165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.438 [2024-11-06 15:36:21.158170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.158178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.158185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.158188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc11550) 00:24:03.438 [2024-11-06 15:36:21.158195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.438 [2024-11-06 15:36:21.158208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73100, cid 0, qid 0 00:24:03.438 [2024-11-06 15:36:21.158214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73280, cid 1, qid 0 00:24:03.438 [2024-11-06 15:36:21.158219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73400, cid 2, qid 0 00:24:03.438 [2024-11-06 15:36:21.158226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xc73580, cid 3, qid 0 00:24:03.438 [2024-11-06 15:36:21.158231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73700, cid 4, qid 0 00:24:03.438 [2024-11-06 15:36:21.158489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.438 [2024-11-06 15:36:21.158495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.438 [2024-11-06 15:36:21.158499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.158503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73700) on tqpair=0xc11550 00:24:03.438 [2024-11-06 15:36:21.158510] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:03.438 [2024-11-06 15:36:21.158515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.158525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.158532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.158539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.158542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.158546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc11550) 00:24:03.438 [2024-11-06 15:36:21.158553] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.438 [2024-11-06 15:36:21.158563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73700, cid 4, qid 0 00:24:03.438 [2024-11-06 15:36:21.158780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.438 [2024-11-06 15:36:21.158786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.438 [2024-11-06 15:36:21.158790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.158794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73700) on tqpair=0xc11550 00:24:03.438 [2024-11-06 15:36:21.158863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.158873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.158881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.158884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc11550) 00:24:03.438 [2024-11-06 15:36:21.158891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.438 [2024-11-06 15:36:21.158901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73700, cid 4, qid 0 00:24:03.438 [2024-11-06 15:36:21.159119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.438 
[2024-11-06 15:36:21.159126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.438 [2024-11-06 15:36:21.159130] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.159134] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc11550): datao=0, datal=4096, cccid=4 00:24:03.438 [2024-11-06 15:36:21.159138] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc73700) on tqpair(0xc11550): expected_datao=0, payload_size=4096 00:24:03.438 [2024-11-06 15:36:21.159143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.159150] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.159154] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.159340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.438 [2024-11-06 15:36:21.159347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.438 [2024-11-06 15:36:21.159350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.159354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73700) on tqpair=0xc11550 00:24:03.438 [2024-11-06 15:36:21.159365] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:03.438 [2024-11-06 15:36:21.159376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.159386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.159393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.159396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc11550) 00:24:03.438 [2024-11-06 15:36:21.159403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.438 [2024-11-06 15:36:21.159414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73700, cid 4, qid 0 00:24:03.438 [2024-11-06 15:36:21.159637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.438 [2024-11-06 15:36:21.159644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.438 [2024-11-06 15:36:21.159647] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.159651] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc11550): datao=0, datal=4096, cccid=4 00:24:03.438 [2024-11-06 15:36:21.159655] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc73700) on tqpair(0xc11550): expected_datao=0, payload_size=4096 00:24:03.438 [2024-11-06 15:36:21.159660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.159666] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.159670] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.159863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.438 [2024-11-06 15:36:21.159870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:24:03.438 [2024-11-06 15:36:21.159873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.159877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73700) on tqpair=0xc11550 00:24:03.438 [2024-11-06 15:36:21.159891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.159902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.159909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.159913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc11550) 00:24:03.438 [2024-11-06 15:36:21.159919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.438 [2024-11-06 15:36:21.159930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73700, cid 4, qid 0 00:24:03.438 [2024-11-06 15:36:21.160163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.438 [2024-11-06 15:36:21.160169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.438 [2024-11-06 15:36:21.160173] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.160176] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc11550): datao=0, datal=4096, cccid=4 00:24:03.438 [2024-11-06 15:36:21.160181] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc73700) on tqpair(0xc11550): expected_datao=0, payload_size=4096 00:24:03.438 [2024-11-06 15:36:21.160188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.160194] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.160198] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.160381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.438 [2024-11-06 15:36:21.160387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.438 [2024-11-06 15:36:21.160391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.438 [2024-11-06 15:36:21.160395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73700) on tqpair=0xc11550 00:24:03.438 [2024-11-06 15:36:21.160403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.160412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.160421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.160428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.160434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell 
buffer config (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.160440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:03.438 [2024-11-06 15:36:21.160445] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:03.438 [2024-11-06 15:36:21.160450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:03.439 [2024-11-06 15:36:21.160456] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:03.439 [2024-11-06 15:36:21.160473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.160477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc11550) 00:24:03.439 [2024-11-06 15:36:21.160484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.439 [2024-11-06 15:36:21.160491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.160494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.160498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc11550) 00:24:03.439 [2024-11-06 15:36:21.160504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.439 [2024-11-06 15:36:21.160518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73700, cid 4, qid 0 00:24:03.439 [2024-11-06 15:36:21.160523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73880, cid 5, qid 0 00:24:03.439 [2024-11-06 15:36:21.160742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.439 [2024-11-06 15:36:21.160753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.439 [2024-11-06 15:36:21.160756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.160760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73700) on tqpair=0xc11550 00:24:03.439 [2024-11-06 15:36:21.160767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.439 [2024-11-06 15:36:21.160773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.439 [2024-11-06 15:36:21.160776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.160780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73880) on tqpair=0xc11550 00:24:03.439 [2024-11-06 15:36:21.160792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.160796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc11550) 00:24:03.439 [2024-11-06 15:36:21.160802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.439 [2024-11-06 15:36:21.160813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73880, cid 5, qid 0 00:24:03.439 [2024-11-06 15:36:21.161014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.439 [2024-11-06 
15:36:21.161020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.439 [2024-11-06 15:36:21.161023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.161027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73880) on tqpair=0xc11550 00:24:03.439 [2024-11-06 15:36:21.161036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.161040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc11550) 00:24:03.439 [2024-11-06 15:36:21.161047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.439 [2024-11-06 15:36:21.161056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73880, cid 5, qid 0 00:24:03.439 [2024-11-06 15:36:21.161245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.439 [2024-11-06 15:36:21.161252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.439 [2024-11-06 15:36:21.161255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.161259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73880) on tqpair=0xc11550 00:24:03.439 [2024-11-06 15:36:21.161268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.161272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc11550) 00:24:03.439 [2024-11-06 15:36:21.161279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.439 [2024-11-06 15:36:21.161288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73880, cid 5, qid 0 00:24:03.439 [2024-11-06 15:36:21.161502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.439 [2024-11-06 15:36:21.161509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.439 [2024-11-06 15:36:21.161512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.161516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73880) on tqpair=0xc11550 00:24:03.439 [2024-11-06 15:36:21.161531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.161535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc11550) 00:24:03.439 [2024-11-06 15:36:21.161542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.439 [2024-11-06 15:36:21.161549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.161553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc11550) 00:24:03.439 [2024-11-06 15:36:21.161559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.439 [2024-11-06 15:36:21.161567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.161570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc11550) 
00:24:03.439 [2024-11-06 15:36:21.161577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.439 [2024-11-06 15:36:21.161589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.161593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc11550) 00:24:03.439 [2024-11-06 15:36:21.161599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.439 [2024-11-06 15:36:21.161611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73880, cid 5, qid 0 00:24:03.439 [2024-11-06 15:36:21.161616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73700, cid 4, qid 0 00:24:03.439 [2024-11-06 15:36:21.161621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73a00, cid 6, qid 0 00:24:03.439 [2024-11-06 15:36:21.161626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73b80, cid 7, qid 0 00:24:03.439 [2024-11-06 15:36:21.165763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.439 [2024-11-06 15:36:21.165771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.439 [2024-11-06 15:36:21.165774] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165778] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc11550): datao=0, datal=8192, cccid=5 00:24:03.439 [2024-11-06 15:36:21.165782] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc73880) on tqpair(0xc11550): expected_datao=0, payload_size=8192 00:24:03.439 [2024-11-06 15:36:21.165787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165794] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165798] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.439 [2024-11-06 15:36:21.165809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.439 [2024-11-06 15:36:21.165813] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165816] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc11550): datao=0, datal=512, cccid=4 00:24:03.439 [2024-11-06 15:36:21.165821] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc73700) on tqpair(0xc11550): expected_datao=0, payload_size=512 00:24:03.439 [2024-11-06 15:36:21.165825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165832] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165835] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.439 [2024-11-06 15:36:21.165847] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.439 [2024-11-06 15:36:21.165850] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165854] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0xc11550): datao=0, datal=512, cccid=6 00:24:03.439 [2024-11-06 15:36:21.165858] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc73a00) on tqpair(0xc11550): expected_datao=0, payload_size=512 00:24:03.439 [2024-11-06 15:36:21.165863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165869] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165873] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.439 [2024-11-06 15:36:21.165884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.439 [2024-11-06 15:36:21.165887] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165891] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc11550): datao=0, datal=4096, cccid=7 00:24:03.439 [2024-11-06 15:36:21.165895] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc73b80) on tqpair(0xc11550): expected_datao=0, payload_size=4096 00:24:03.439 [2024-11-06 15:36:21.165900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165909] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165913] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.439 [2024-11-06 15:36:21.165924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.439 [2024-11-06 15:36:21.165928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73880) on tqpair=0xc11550 00:24:03.439 [2024-11-06 15:36:21.165944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.439 [2024-11-06 15:36:21.165950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.439 [2024-11-06 15:36:21.165954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73700) on tqpair=0xc11550 00:24:03.439 [2024-11-06 15:36:21.165968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.439 [2024-11-06 15:36:21.165974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.439 [2024-11-06 15:36:21.165978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.165982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73a00) on tqpair=0xc11550 00:24:03.439 [2024-11-06 15:36:21.165989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.439 [2024-11-06 15:36:21.165994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.439 [2024-11-06 15:36:21.165998] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.439 [2024-11-06 15:36:21.166002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73b80) on tqpair=0xc11550 00:24:03.439 ===================================================== 00:24:03.440 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.440 ===================================================== 
00:24:03.440 Controller Capabilities/Features 00:24:03.440 ================================ 00:24:03.440 Vendor ID: 8086 00:24:03.440 Subsystem Vendor ID: 8086 00:24:03.440 Serial Number: SPDK00000000000001 00:24:03.440 Model Number: SPDK bdev Controller 00:24:03.440 Firmware Version: 25.01 00:24:03.440 Recommended Arb Burst: 6 00:24:03.440 IEEE OUI Identifier: e4 d2 5c 00:24:03.440 Multi-path I/O 00:24:03.440 May have multiple subsystem ports: Yes 00:24:03.440 May have multiple controllers: Yes 00:24:03.440 Associated with SR-IOV VF: No 00:24:03.440 Max Data Transfer Size: 131072 00:24:03.440 Max Number of Namespaces: 32 00:24:03.440 Max Number of I/O Queues: 127 00:24:03.440 NVMe Specification Version (VS): 1.3 00:24:03.440 NVMe Specification Version (Identify): 1.3 00:24:03.440 Maximum Queue Entries: 128 00:24:03.440 Contiguous Queues Required: Yes 00:24:03.440 Arbitration Mechanisms Supported 00:24:03.440 Weighted Round Robin: Not Supported 00:24:03.440 Vendor Specific: Not Supported 00:24:03.440 Reset Timeout: 15000 ms 00:24:03.440 Doorbell Stride: 4 bytes 00:24:03.440 NVM Subsystem Reset: Not Supported 00:24:03.440 Command Sets Supported 00:24:03.440 NVM Command Set: Supported 00:24:03.440 Boot Partition: Not Supported 00:24:03.440 Memory Page Size Minimum: 4096 bytes 00:24:03.440 Memory Page Size Maximum: 4096 bytes 00:24:03.440 Persistent Memory Region: Not Supported 00:24:03.440 Optional Asynchronous Events Supported 00:24:03.440 Namespace Attribute Notices: Supported 00:24:03.440 Firmware Activation Notices: Not Supported 00:24:03.440 ANA Change Notices: Not Supported 00:24:03.440 PLE Aggregate Log Change Notices: Not Supported 00:24:03.440 LBA Status Info Alert Notices: Not Supported 00:24:03.440 EGE Aggregate Log Change Notices: Not Supported 00:24:03.440 Normal NVM Subsystem Shutdown event: Not Supported 00:24:03.440 Zone Descriptor Change Notices: Not Supported 00:24:03.440 Discovery Log Change Notices: Not Supported 00:24:03.440 Controller Attributes 00:24:03.440 128-bit Host Identifier: Supported 00:24:03.440 Non-Operational Permissive Mode: Not Supported 00:24:03.440 NVM Sets: Not Supported 00:24:03.440 Read Recovery Levels: Not Supported 00:24:03.440 Endurance Groups: Not Supported 00:24:03.440 Predictable Latency Mode: Not Supported 00:24:03.440 Traffic Based Keep ALive: Not Supported 00:24:03.440 Namespace Granularity: Not Supported 00:24:03.440 SQ Associations: Not Supported 00:24:03.440 UUID List: Not Supported 00:24:03.440 Multi-Domain Subsystem: Not Supported 00:24:03.440 Fixed Capacity Management: Not Supported 00:24:03.440 Variable Capacity Management: Not Supported 00:24:03.440 Delete Endurance Group: Not Supported 00:24:03.440 Delete NVM Set: Not Supported 00:24:03.440 Extended LBA Formats Supported: Not Supported 00:24:03.440 Flexible Data Placement Supported: Not Supported 00:24:03.440 00:24:03.440 Controller Memory Buffer Support 00:24:03.440 ================================ 00:24:03.440 Supported: No 00:24:03.440 00:24:03.440 Persistent Memory Region Support 00:24:03.440 ================================ 00:24:03.440 Supported: No 00:24:03.440 00:24:03.440 Admin Command Set Attributes 00:24:03.440 ============================ 00:24:03.440 Security Send/Receive: Not Supported 00:24:03.440 Format NVM: Not Supported 00:24:03.440 Firmware Activate/Download: Not Supported 00:24:03.440 Namespace Management: Not Supported 00:24:03.440 Device Self-Test: Not Supported 00:24:03.440 Directives: Not Supported 00:24:03.440 NVMe-MI: Not Supported 00:24:03.440 
Virtualization Management: Not Supported 00:24:03.440 Doorbell Buffer Config: Not Supported 00:24:03.440 Get LBA Status Capability: Not Supported 00:24:03.440 Command & Feature Lockdown Capability: Not Supported 00:24:03.440 Abort Command Limit: 4 00:24:03.440 Async Event Request Limit: 4 00:24:03.440 Number of Firmware Slots: N/A 00:24:03.440 Firmware Slot 1 Read-Only: N/A 00:24:03.440 Firmware Activation Without Reset: N/A 00:24:03.440 Multiple Update Detection Support: N/A 00:24:03.440 Firmware Update Granularity: No Information Provided 00:24:03.440 Per-Namespace SMART Log: No 00:24:03.440 Asymmetric Namespace Access Log Page: Not Supported 00:24:03.440 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:03.440 Command Effects Log Page: Supported 00:24:03.440 Get Log Page Extended Data: Supported 00:24:03.440 Telemetry Log Pages: Not Supported 00:24:03.440 Persistent Event Log Pages: Not Supported 00:24:03.440 Supported Log Pages Log Page: May Support 00:24:03.440 Commands Supported & Effects Log Page: Not Supported 00:24:03.440 Feature Identifiers & Effects Log Page:May Support 00:24:03.440 NVMe-MI Commands & Effects Log Page: May Support 00:24:03.440 Data Area 4 for Telemetry Log: Not Supported 00:24:03.440 Error Log Page Entries Supported: 128 00:24:03.440 Keep Alive: Supported 00:24:03.440 Keep Alive Granularity: 10000 ms 00:24:03.440 00:24:03.440 NVM Command Set Attributes 00:24:03.440 ========================== 00:24:03.440 Submission Queue Entry Size 00:24:03.440 Max: 64 00:24:03.440 Min: 64 00:24:03.440 Completion Queue Entry Size 00:24:03.440 Max: 16 00:24:03.440 Min: 16 00:24:03.440 Number of Namespaces: 32 00:24:03.440 Compare Command: Supported 00:24:03.440 Write Uncorrectable Command: Not Supported 00:24:03.440 Dataset Management Command: Supported 00:24:03.440 Write Zeroes Command: Supported 00:24:03.440 Set Features Save Field: Not Supported 00:24:03.440 Reservations: Supported 00:24:03.440 Timestamp: Not Supported 00:24:03.440 Copy: Supported 00:24:03.440 Volatile Write Cache: Present 00:24:03.440 Atomic Write Unit (Normal): 1 00:24:03.440 Atomic Write Unit (PFail): 1 00:24:03.440 Atomic Compare & Write Unit: 1 00:24:03.440 Fused Compare & Write: Supported 00:24:03.440 Scatter-Gather List 00:24:03.440 SGL Command Set: Supported 00:24:03.440 SGL Keyed: Supported 00:24:03.440 SGL Bit Bucket Descriptor: Not Supported 00:24:03.440 SGL Metadata Pointer: Not Supported 00:24:03.440 Oversized SGL: Not Supported 00:24:03.440 SGL Metadata Address: Not Supported 00:24:03.440 SGL Offset: Supported 00:24:03.440 Transport SGL Data Block: Not Supported 00:24:03.440 Replay Protected Memory Block: Not Supported 00:24:03.440 00:24:03.440 Firmware Slot Information 00:24:03.440 ========================= 00:24:03.440 Active slot: 1 00:24:03.440 Slot 1 Firmware Revision: 25.01 00:24:03.440 00:24:03.440 00:24:03.440 Commands Supported and Effects 00:24:03.440 ============================== 00:24:03.440 Admin Commands 00:24:03.440 -------------- 00:24:03.440 Get Log Page (02h): Supported 00:24:03.440 Identify (06h): Supported 00:24:03.440 Abort (08h): Supported 00:24:03.440 Set Features (09h): Supported 00:24:03.440 Get Features (0Ah): Supported 00:24:03.440 Asynchronous Event Request (0Ch): Supported 00:24:03.440 Keep Alive (18h): Supported 00:24:03.440 I/O Commands 00:24:03.440 ------------ 00:24:03.440 Flush (00h): Supported LBA-Change 00:24:03.440 Write (01h): Supported LBA-Change 00:24:03.440 Read (02h): Supported 00:24:03.440 Compare (05h): Supported 00:24:03.440 Write Zeroes (08h): 
Supported LBA-Change 00:24:03.440 Dataset Management (09h): Supported LBA-Change 00:24:03.440 Copy (19h): Supported LBA-Change 00:24:03.440 00:24:03.440 Error Log 00:24:03.440 ========= 00:24:03.440 00:24:03.440 Arbitration 00:24:03.440 =========== 00:24:03.440 Arbitration Burst: 1 00:24:03.440 00:24:03.440 Power Management 00:24:03.440 ================ 00:24:03.440 Number of Power States: 1 00:24:03.440 Current Power State: Power State #0 00:24:03.440 Power State #0: 00:24:03.440 Max Power: 0.00 W 00:24:03.440 Non-Operational State: Operational 00:24:03.440 Entry Latency: Not Reported 00:24:03.440 Exit Latency: Not Reported 00:24:03.440 Relative Read Throughput: 0 00:24:03.440 Relative Read Latency: 0 00:24:03.440 Relative Write Throughput: 0 00:24:03.440 Relative Write Latency: 0 00:24:03.440 Idle Power: Not Reported 00:24:03.440 Active Power: Not Reported 00:24:03.440 Non-Operational Permissive Mode: Not Supported 00:24:03.440 00:24:03.440 Health Information 00:24:03.440 ================== 00:24:03.440 Critical Warnings: 00:24:03.440 Available Spare Space: OK 00:24:03.440 Temperature: OK 00:24:03.440 Device Reliability: OK 00:24:03.440 Read Only: No 00:24:03.440 Volatile Memory Backup: OK 00:24:03.440 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:03.440 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:03.440 Available Spare: 0% 00:24:03.440 Available Spare Threshold: 0% 00:24:03.440 Life Percentage Used:[2024-11-06 15:36:21.166103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.166109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc11550) 00:24:03.441 [2024-11-06 15:36:21.166116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.441 [2024-11-06 15:36:21.166130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73b80, cid 7, qid 0 00:24:03.441 [2024-11-06 15:36:21.166354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.441 [2024-11-06 15:36:21.166361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.441 [2024-11-06 15:36:21.166364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.166368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73b80) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.166402] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:03.441 [2024-11-06 15:36:21.166412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73100) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.166419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.441 [2024-11-06 15:36:21.166425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73280) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.166429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.441 [2024-11-06 15:36:21.166434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73400) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.166439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.441 [2024-11-06 
15:36:21.166444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.166449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.441 [2024-11-06 15:36:21.166457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.166466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.166469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.441 [2024-11-06 15:36:21.166477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.441 [2024-11-06 15:36:21.166488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.441 [2024-11-06 15:36:21.166689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.441 [2024-11-06 15:36:21.166696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.441 [2024-11-06 15:36:21.166699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.166703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.166710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.166714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.166717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.441 [2024-11-06 15:36:21.166724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.441 [2024-11-06 15:36:21.166737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.441 [2024-11-06 15:36:21.166949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.441 [2024-11-06 15:36:21.166956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.441 [2024-11-06 15:36:21.166959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.166963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.166968] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:03.441 [2024-11-06 15:36:21.166973] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:03.441 [2024-11-06 15:36:21.166983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.166987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.166990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.441 [2024-11-06 15:36:21.166997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.441 [2024-11-06 15:36:21.167008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.441 [2024-11-06 15:36:21.167205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:24:03.441 [2024-11-06 15:36:21.167211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.441 [2024-11-06 15:36:21.167215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.167219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.167229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.167233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.167236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.441 [2024-11-06 15:36:21.167243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.441 [2024-11-06 15:36:21.167253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.441 [2024-11-06 15:36:21.167423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.441 [2024-11-06 15:36:21.167429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.441 [2024-11-06 15:36:21.167433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.167439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.167449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.167453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.167457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.441 [2024-11-06 15:36:21.167464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.441 [2024-11-06 15:36:21.167474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.441 [2024-11-06 15:36:21.167677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.441 [2024-11-06 15:36:21.167683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.441 [2024-11-06 15:36:21.167686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.167690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.167701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.167704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.167708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.441 [2024-11-06 15:36:21.167715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.441 [2024-11-06 15:36:21.167725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.441 [2024-11-06 15:36:21.167949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.441 [2024-11-06 15:36:21.167956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.441 [2024-11-06 15:36:21.167959] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.167963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.167973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.167977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.167981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.441 [2024-11-06 15:36:21.167987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.441 [2024-11-06 15:36:21.167997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.441 [2024-11-06 15:36:21.168175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.441 [2024-11-06 15:36:21.168181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.441 [2024-11-06 15:36:21.168184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.168188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.168198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.168202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.168206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.441 [2024-11-06 15:36:21.168213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.441 [2024-11-06 15:36:21.168222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.441 [2024-11-06 15:36:21.168430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.441 [2024-11-06 15:36:21.168437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.441 [2024-11-06 15:36:21.168440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.168444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.441 [2024-11-06 15:36:21.168456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.168460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.168464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.441 [2024-11-06 15:36:21.168471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.441 [2024-11-06 15:36:21.168481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.441 [2024-11-06 15:36:21.168666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.441 [2024-11-06 15:36:21.168672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.441 [2024-11-06 15:36:21.168676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.168679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.441 
[2024-11-06 15:36:21.168689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.168693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.441 [2024-11-06 15:36:21.168697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.441 [2024-11-06 15:36:21.168703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.441 [2024-11-06 15:36:21.168713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.441 [2024-11-06 15:36:21.168911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.441 [2024-11-06 15:36:21.168917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.441 [2024-11-06 15:36:21.168921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.442 [2024-11-06 15:36:21.168925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.442 [2024-11-06 15:36:21.168935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.442 [2024-11-06 15:36:21.168938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.442 [2024-11-06 15:36:21.168942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.442 [2024-11-06 15:36:21.168949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.442 [2024-11-06 15:36:21.168959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.442 [2024-11-06 15:36:21.169173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.442 [2024-11-06 15:36:21.169179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.442 [2024-11-06 15:36:21.169183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.442 [2024-11-06 15:36:21.169187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.442 [2024-11-06 15:36:21.169196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.442 [2024-11-06 15:36:21.169200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.442 [2024-11-06 15:36:21.169204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.442 [2024-11-06 15:36:21.169211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.442 [2024-11-06 15:36:21.169221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.442 [2024-11-06 15:36:21.169425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.442 [2024-11-06 15:36:21.169431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.442 [2024-11-06 15:36:21.169435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.442 [2024-11-06 15:36:21.169439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.442 [2024-11-06 15:36:21.169448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.442 [2024-11-06 15:36:21.169452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.442 [2024-11-06 
15:36:21.169458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.442 [2024-11-06 15:36:21.169465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.442 [2024-11-06 15:36:21.169475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.442 [2024-11-06 15:36:21.169648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.442 [2024-11-06 15:36:21.169654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.442 [2024-11-06 15:36:21.169658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.442 [2024-11-06 15:36:21.169661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.442 [2024-11-06 15:36:21.169671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.442 [2024-11-06 15:36:21.169675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.442 [2024-11-06 15:36:21.169679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11550) 00:24:03.442 [2024-11-06 15:36:21.169685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.442 [2024-11-06 15:36:21.169695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc73580, cid 3, qid 0 00:24:03.442 [2024-11-06 15:36:21.173755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.442 [2024-11-06 15:36:21.173764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.442 [2024-11-06 15:36:21.173767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.442 [2024-11-06 15:36:21.173771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc73580) on tqpair=0xc11550 00:24:03.442 [2024-11-06 15:36:21.173780] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:24:03.442 0% 00:24:03.442 Data Units Read: 0 00:24:03.442 Data Units Written: 0 00:24:03.442 Host Read Commands: 0 00:24:03.442 Host Write Commands: 0 00:24:03.442 Controller Busy Time: 0 minutes 00:24:03.442 Power Cycles: 0 00:24:03.442 Power On Hours: 0 hours 00:24:03.442 Unsafe Shutdowns: 0 00:24:03.442 Unrecoverable Media Errors: 0 00:24:03.442 Lifetime Error Log Entries: 0 00:24:03.442 Warning Temperature Time: 0 minutes 00:24:03.442 Critical Temperature Time: 0 minutes 00:24:03.442 00:24:03.442 Number of Queues 00:24:03.442 ================ 00:24:03.442 Number of I/O Submission Queues: 127 00:24:03.442 Number of I/O Completion Queues: 127 00:24:03.442 00:24:03.442 Active Namespaces 00:24:03.442 ================= 00:24:03.442 Namespace ID:1 00:24:03.442 Error Recovery Timeout: Unlimited 00:24:03.442 Command Set Identifier: NVM (00h) 00:24:03.442 Deallocate: Supported 00:24:03.442 Deallocated/Unwritten Error: Not Supported 00:24:03.442 Deallocated Read Value: Unknown 00:24:03.442 Deallocate in Write Zeroes: Not Supported 00:24:03.442 Deallocated Guard Field: 0xFFFF 00:24:03.442 Flush: Supported 00:24:03.442 Reservation: Supported 00:24:03.442 Namespace Sharing Capabilities: Multiple Controllers 00:24:03.442 Size (in LBAs): 131072 (0GiB) 00:24:03.442 Capacity (in LBAs): 131072 (0GiB) 00:24:03.442 Utilization (in LBAs): 131072 (0GiB) 00:24:03.442 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:24:03.442 EUI64: ABCDEF0123456789 00:24:03.442 UUID: 2279d5a8-fa91-4301-87ae-6cc3b613ea3f 00:24:03.442 Thin Provisioning: Not Supported 00:24:03.442 Per-NS Atomic Units: Yes 00:24:03.442 Atomic Boundary Size (Normal): 0 00:24:03.442 Atomic Boundary Size (PFail): 0 00:24:03.442 Atomic Boundary Offset: 0 00:24:03.442 Maximum Single Source Range Length: 65535 00:24:03.442 Maximum Copy Length: 65535 00:24:03.442 Maximum Source Range Count: 1 00:24:03.442 NGUID/EUI64 Never Reused: No 00:24:03.442 Namespace Write Protected: No 00:24:03.442 Number of LBA Formats: 1 00:24:03.442 Current LBA Format: LBA Format #00 00:24:03.442 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:03.442 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.442 rmmod nvme_tcp 00:24:03.442 rmmod nvme_fabrics 00:24:03.442 rmmod nvme_keyring 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3872122 ']' 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3872122 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 3872122 ']' 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 3872122 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3872122 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:03.442 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3872122' 00:24:03.443 killing process with pid 3872122 00:24:03.443 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 3872122 00:24:03.443 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 3872122 00:24:03.704 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.704 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.704 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.704 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:03.704 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:03.704 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.704 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:03.704 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.704 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:03.704 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.704 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.704 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.247 00:24:06.247 real 0m11.857s 00:24:06.247 user 0m8.969s 00:24:06.247 sys 0m6.270s 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.247 ************************************ 00:24:06.247 END TEST nvmf_identify 00:24:06.247 ************************************ 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.247 ************************************ 00:24:06.247 START TEST nvmf_perf 00:24:06.247 ************************************ 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:06.247 * Looking for test storage... 
00:24:06.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:06.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.247 --rc genhtml_branch_coverage=1 00:24:06.247 --rc genhtml_function_coverage=1 00:24:06.247 --rc genhtml_legend=1 00:24:06.247 --rc geninfo_all_blocks=1 00:24:06.247 --rc geninfo_unexecuted_blocks=1 00:24:06.247 00:24:06.247 ' 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:06.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.247 --rc genhtml_branch_coverage=1 00:24:06.247 --rc genhtml_function_coverage=1 00:24:06.247 --rc genhtml_legend=1 00:24:06.247 --rc geninfo_all_blocks=1 00:24:06.247 --rc geninfo_unexecuted_blocks=1 00:24:06.247 00:24:06.247 ' 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:06.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.247 --rc genhtml_branch_coverage=1 00:24:06.247 --rc genhtml_function_coverage=1 00:24:06.247 --rc genhtml_legend=1 00:24:06.247 --rc geninfo_all_blocks=1 00:24:06.247 --rc geninfo_unexecuted_blocks=1 00:24:06.247 00:24:06.247 ' 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:06.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.247 --rc genhtml_branch_coverage=1 00:24:06.247 --rc genhtml_function_coverage=1 00:24:06.247 --rc genhtml_legend=1 00:24:06.247 --rc geninfo_all_blocks=1 00:24:06.247 --rc geninfo_unexecuted_blocks=1 00:24:06.247 00:24:06.247 ' 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.247 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.248 15:36:23 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.248 15:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:14.385 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:14.385 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:14.385 Found net devices under 0000:31:00.0: cvl_0_0 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.385 15:36:31 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:14.385 Found net devices under 0000:31:00.1: cvl_0_1 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.385 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.386 15:36:31 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:24:14.386 00:24:14.386 --- 10.0.0.2 ping statistics --- 00:24:14.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.386 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:24:14.386 00:24:14.386 --- 10.0.0.1 ping statistics --- 00:24:14.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.386 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3876822 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3876822 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 3876822 ']' 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:14.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:14.386 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.386 [2024-11-06 15:36:31.661497] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:24:14.386 [2024-11-06 15:36:31.661564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.386 [2024-11-06 15:36:31.765992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.386 [2024-11-06 15:36:31.819193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.386 [2024-11-06 15:36:31.819249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.386 [2024-11-06 15:36:31.819257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.386 [2024-11-06 15:36:31.819264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.386 [2024-11-06 15:36:31.819270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.386 [2024-11-06 15:36:31.821372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.386 [2024-11-06 15:36:31.821534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.386 [2024-11-06 15:36:31.821695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.386 [2024-11-06 15:36:31.821694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.646 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:14.646 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:14.646 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.646 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:14.646 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.646 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.646 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:14.646 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:15.218 15:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:15.218 15:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:15.478 15:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:15.478 15:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:15.739 15:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
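With a 64 MiB, 512-byte-block malloc bdev (Malloc0) created alongside the local NVMe bdev (Nvme0n1), perf.sh exports both over NVMe/TCP through the RPC sequence traced next. A condensed sketch of that sequence, with the long interpreter path shortened to rpc.py; every argument is the one this run actually passed:

rpc.py bdev_malloc_create 64 512                                   # -> Malloc0 (64 MiB, 512 B blocks)
rpc.py nvmf_create_transport -t tcp -o                             # TCP transport, options from NVMF_TRANSPORT_OPTS
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # namespace 1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # namespace 2
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
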
00:24:15.739 15:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:15.739 15:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:15.739 15:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:15.739 15:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:15.739 [2024-11-06 15:36:33.665187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.739 15:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.000 15:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:16.000 15:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.260 15:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:16.260 15:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:16.521 15:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.521 [2024-11-06 15:36:34.436288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.521 15:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:16.782 15:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:16.782 15:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:16.782 15:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:16.782 15:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:18.167 Initializing NVMe Controllers 00:24:18.167 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:18.167 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:18.167 Initialization complete. Launching workers. 
00:24:18.167 ======================================================== 00:24:18.167 Latency(us) 00:24:18.167 Device Information : IOPS MiB/s Average min max 00:24:18.167 PCIE (0000:65:00.0) NSID 1 from core 0: 76927.14 300.50 415.22 13.30 5387.53 00:24:18.167 ======================================================== 00:24:18.167 Total : 76927.14 300.50 415.22 13.30 5387.53 00:24:18.167 00:24:18.167 15:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:19.551 Initializing NVMe Controllers 00:24:19.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:19.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:19.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:19.551 Initialization complete. Launching workers. 00:24:19.551 ======================================================== 00:24:19.551 Latency(us) 00:24:19.551 Device Information : IOPS MiB/s Average min max 00:24:19.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 11006.38 114.53 45738.06 00:24:19.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.00 0.21 19047.22 7963.36 47890.96 00:24:19.551 ======================================================== 00:24:19.551 Total : 146.00 0.57 13980.39 114.53 47890.96 00:24:19.551 00:24:19.551 15:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:20.649 Initializing NVMe Controllers 00:24:20.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:20.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:20.649 Initialization complete. Launching workers. 00:24:20.649 ======================================================== 00:24:20.649 Latency(us) 00:24:20.649 Device Information : IOPS MiB/s Average min max 00:24:20.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11536.79 45.07 2783.72 438.18 6500.96 00:24:20.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3774.60 14.74 8515.68 6507.01 16155.84 00:24:20.649 ======================================================== 00:24:20.649 Total : 15311.39 59.81 4196.78 438.18 16155.84 00:24:20.649 00:24:20.649 15:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:20.649 15:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:20.649 15:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.190 Initializing NVMe Controllers 00:24:23.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.190 Controller IO queue size 128, less than required. 00:24:23.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:23.190 Controller IO queue size 128, less than required. 00:24:23.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:23.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:23.190 Initialization complete. Launching workers. 00:24:23.190 ======================================================== 00:24:23.190 Latency(us) 00:24:23.190 Device Information : IOPS MiB/s Average min max 00:24:23.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1817.49 454.37 71410.10 39709.07 119954.09 00:24:23.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.50 152.37 221985.72 60668.16 346359.96 00:24:23.190 ======================================================== 00:24:23.190 Total : 2426.99 606.75 109224.62 39709.07 346359.96 00:24:23.190 00:24:23.190 15:36:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:23.451 No valid NVMe controllers or AIO or URING devices found 00:24:23.451 Initializing NVMe Controllers 00:24:23.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.451 Controller IO queue size 128, less than required. 00:24:23.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.451 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:23.451 Controller IO queue size 128, less than required. 00:24:23.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.451 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:23.451 WARNING: Some requested NVMe devices were skipped 00:24:23.451 15:36:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:25.996 Initializing NVMe Controllers 00:24:25.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.996 Controller IO queue size 128, less than required. 00:24:25.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:25.996 Controller IO queue size 128, less than required. 00:24:25.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:25.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:25.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:25.996 Initialization complete. Launching workers. 
00:24:25.996 00:24:25.996 ==================== 00:24:25.996 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:25.996 TCP transport: 00:24:25.996 polls: 36689 00:24:25.996 idle_polls: 23801 00:24:25.996 sock_completions: 12888 00:24:25.996 nvme_completions: 9077 00:24:25.996 submitted_requests: 13534 00:24:25.996 queued_requests: 1 00:24:25.996 00:24:25.996 ==================== 00:24:25.996 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:25.996 TCP transport: 00:24:25.996 polls: 37843 00:24:25.996 idle_polls: 23958 00:24:25.996 sock_completions: 13885 00:24:25.996 nvme_completions: 6705 00:24:25.996 submitted_requests: 10044 00:24:25.996 queued_requests: 1 00:24:25.996 ======================================================== 00:24:25.996 Latency(us) 00:24:25.996 Device Information : IOPS MiB/s Average min max 00:24:25.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2268.86 567.22 57271.88 34787.39 96502.88 00:24:25.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1675.90 418.97 77411.49 32029.64 135062.71 00:24:25.996 ======================================================== 00:24:25.996 Total : 3944.76 986.19 65828.02 32029.64 135062.71 00:24:25.996 00:24:25.996 15:36:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:25.996 15:36:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:26.257 rmmod nvme_tcp 00:24:26.257 rmmod nvme_fabrics 00:24:26.257 rmmod nvme_keyring 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:26.257 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3876822 ']' 00:24:26.258 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3876822 00:24:26.258 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 3876822 ']' 00:24:26.258 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 3876822 00:24:26.258 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:24:26.258 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:26.258 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3876822 00:24:26.518 15:36:44 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:26.518 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:26.518 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3876822' 00:24:26.518 killing process with pid 3876822 00:24:26.518 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 3876822 00:24:26.518 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 3876822 00:24:28.428 15:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.428 15:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.428 15:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.428 15:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:28.428 15:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:28.428 15:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.428 15:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.428 15:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.428 15:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.428 15:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.428 15:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.428 15:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.340 15:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.340 00:24:30.340 real 0m24.584s 00:24:30.340 user 0m58.759s 00:24:30.340 sys 0m8.869s 00:24:30.340 15:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:30.340 15:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:30.340 ************************************ 00:24:30.340 END TEST nvmf_perf 00:24:30.340 ************************************ 00:24:30.340 15:36:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:30.340 15:36:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:30.340 15:36:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:30.340 15:36:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.601 ************************************ 00:24:30.601 START TEST nvmf_fio_host 00:24:30.601 ************************************ 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:30.601 * Looking for test storage... 
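The perf section above drove spdk_nvme_perf against the exported subsystem in several configurations (queue depths 1 to 128, IO sizes from 4 KiB to 256 KiB, mixed 50/50 random read/write, and a transport-statistics pass). For reference, the shape of the fabrics invocation as traced, with a legend reflecting spdk_nvme_perf's standard option meanings:

# -q queue depth, -o IO size in bytes, -w workload pattern,
# -M read percentage for mixed workloads, -t run time in seconds,
# -r target transport ID (trtype/adrfam/traddr/trsvcid)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
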
00:24:30.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:30.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.601 --rc genhtml_branch_coverage=1 00:24:30.601 --rc genhtml_function_coverage=1 00:24:30.601 --rc genhtml_legend=1 00:24:30.601 --rc geninfo_all_blocks=1 00:24:30.601 --rc geninfo_unexecuted_blocks=1 00:24:30.601 00:24:30.601 ' 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:30.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.601 --rc genhtml_branch_coverage=1 00:24:30.601 --rc genhtml_function_coverage=1 00:24:30.601 --rc genhtml_legend=1 00:24:30.601 --rc geninfo_all_blocks=1 00:24:30.601 --rc geninfo_unexecuted_blocks=1 00:24:30.601 00:24:30.601 ' 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:30.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.601 --rc genhtml_branch_coverage=1 00:24:30.601 --rc genhtml_function_coverage=1 00:24:30.601 --rc genhtml_legend=1 00:24:30.601 --rc geninfo_all_blocks=1 00:24:30.601 --rc geninfo_unexecuted_blocks=1 00:24:30.601 00:24:30.601 ' 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:30.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.601 --rc genhtml_branch_coverage=1 00:24:30.601 --rc genhtml_function_coverage=1 00:24:30.601 --rc genhtml_legend=1 00:24:30.601 --rc geninfo_all_blocks=1 00:24:30.601 --rc geninfo_unexecuted_blocks=1 00:24:30.601 00:24:30.601 ' 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.601 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.601 15:36:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.601
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.601
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.602
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.863
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:30.864
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:30.864
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.864
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.864
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.864
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.864
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.864
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.864
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.864
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.864
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.864
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.864
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:30.864
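The "[: : integer expression expected" message recorded above is a genuine, if harmless, shell error: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) refuses to compare an empty string as an integer, so the command simply returns non-zero and the script moves on. A minimal sketch of the usual guard, assuming the tested value is an optional numeric flag (the variable name OPTIONAL_FLAG and the action taken are illustrative, not taken from this trace):

    # Default the flag to 0 when unset or empty so test(1) always
    # sees an integer operand and never emits the error above.
    if [ "${OPTIONAL_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--wait-for-rpc)  # illustrative action only
    fi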
15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:30.864 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.864 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.864 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.864 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.864 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.864 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.864 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.864 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.864 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.864 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:30.864 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.864 15:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:39.006 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:39.006 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:39.006 Found net devices under 0000:31:00.0: cvl_0_0 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:39.006 Found net devices under 0000:31:00.1: cvl_0_1 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.006 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.007 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:39.007 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:39.007 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.007 15:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:24:39.007 00:24:39.007 --- 10.0.0.2 ping statistics --- 00:24:39.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.007 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:24:39.007 00:24:39.007 --- 10.0.0.1 ping statistics --- 00:24:39.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.007 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3883790 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3883790 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 3883790 ']' 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:39.007 15:36:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.007 [2024-11-06 15:36:56.307835] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:24:39.007 [2024-11-06 15:36:56.307906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.007 [2024-11-06 15:36:56.408579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.007 [2024-11-06 15:36:56.461691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.007 [2024-11-06 15:36:56.461743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.007 [2024-11-06 15:36:56.461762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.007 [2024-11-06 15:36:56.461769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.007 [2024-11-06 15:36:56.461775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.007 [2024-11-06 15:36:56.464265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.007 [2024-11-06 15:36:56.464428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.007 [2024-11-06 15:36:56.464585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.007 [2024-11-06 15:36:56.464586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.268 15:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:39.268 15:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:24:39.268 15:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:39.529 [2024-11-06 15:36:57.302948] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.529 15:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:39.529 15:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.529 15:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.529 15:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:39.790 Malloc1 00:24:39.790 15:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.052 15:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.052 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.313 [2024-11-06 15:36:58.179355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.313 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:40.574 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:40.575 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:40.575 15:36:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.836 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:40.836 fio-3.35 00:24:40.836 Starting 1 thread 00:24:43.398 00:24:43.399 test: (groupid=0, jobs=1): 
err= 0: pid=3884469: Wed Nov 6 15:37:01 2024 00:24:43.399 read: IOPS=12.8k, BW=50.1MiB/s (52.6MB/s)(100MiB/2004msec) 00:24:43.399 slat (usec): min=2, max=287, avg= 2.16, stdev= 2.57 00:24:43.399 clat (usec): min=3833, max=9220, avg=5467.90, stdev=732.45 00:24:43.399 lat (usec): min=3870, max=9222, avg=5470.06, stdev=732.52 00:24:43.399 clat percentiles (usec): 00:24:43.399 | 1.00th=[ 4490], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5014], 00:24:43.399 | 30.00th=[ 5145], 40.00th=[ 5211], 50.00th=[ 5342], 60.00th=[ 5407], 00:24:43.399 | 70.00th=[ 5538], 80.00th=[ 5669], 90.00th=[ 5932], 95.00th=[ 7439], 00:24:43.399 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8979], 99.95th=[ 9110], 00:24:43.399 | 99.99th=[ 9241] 00:24:43.399 bw ( KiB/s): min=44288, max=54008, per=99.92%, avg=51282.00, stdev=4673.02, samples=4 00:24:43.399 iops : min=11072, max=13502, avg=12820.50, stdev=1168.26, samples=4 00:24:43.399 write: IOPS=12.8k, BW=50.0MiB/s (52.4MB/s)(100MiB/2004msec); 0 zone resets 00:24:43.399 slat (usec): min=2, max=288, avg= 2.23, stdev= 1.97 00:24:43.399 clat (usec): min=2998, max=7981, avg=4447.90, stdev=634.94 00:24:43.399 lat (usec): min=3017, max=7983, avg=4450.13, stdev=635.06 00:24:43.399 clat percentiles (usec): 00:24:43.399 | 1.00th=[ 3621], 5.00th=[ 3851], 10.00th=[ 3949], 20.00th=[ 4080], 00:24:43.399 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 00:24:43.399 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 6194], 00:24:43.399 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 7439], 99.95th=[ 7504], 00:24:43.399 | 99.99th=[ 7635] 00:24:43.399 bw ( KiB/s): min=44576, max=53840, per=99.99%, avg=51170.00, stdev=4418.22, samples=4 00:24:43.399 iops : min=11144, max=13460, avg=12792.50, stdev=1104.56, samples=4 00:24:43.399 lat (msec) : 4=6.65%, 10=93.35% 00:24:43.399 cpu : usr=72.34%, sys=26.31%, ctx=40, majf=0, minf=17 00:24:43.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:43.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:43.399 issued rwts: total=25713,25640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:43.399 00:24:43.399 Run status group 0 (all jobs): 00:24:43.399 READ: bw=50.1MiB/s (52.6MB/s), 50.1MiB/s-50.1MiB/s (52.6MB/s-52.6MB/s), io=100MiB (105MB), run=2004-2004msec 00:24:43.399 WRITE: bw=50.0MiB/s (52.4MB/s), 50.0MiB/s-50.0MiB/s (52.4MB/s-52.4MB/s), io=100MiB (105MB), run=2004-2004msec 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:43.399 
15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:43.399 15:37:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:43.660 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:43.660 fio-3.35 00:24:43.660 Starting 1 thread 00:24:46.202 00:24:46.202 test: (groupid=0, jobs=1): err= 0: pid=3885192: Wed Nov 6 15:37:03 2024 00:24:46.202 read: IOPS=9702, BW=152MiB/s (159MB/s)(304MiB/2002msec) 00:24:46.202 slat (usec): min=3, max=110, avg= 3.65, stdev= 1.69 00:24:46.202 clat (usec): min=1163, max=15673, avg=7933.00, stdev=1820.19 00:24:46.202 lat (usec): min=1167, max=15690, avg=7936.65, stdev=1820.36 00:24:46.202 clat percentiles (usec): 00:24:46.202 | 1.00th=[ 3949], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6325], 00:24:46.202 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7832], 60.00th=[ 8356], 00:24:46.202 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[10814], 00:24:46.202 | 99.00th=[12256], 99.50th=[12780], 99.90th=[14091], 99.95th=[15008], 00:24:46.202 | 99.99th=[15664] 00:24:46.202 bw ( KiB/s): min=72640, max=82304, per=49.68%, avg=77120.00, stdev=4274.45, samples=4 00:24:46.202 iops : min= 4540, max= 5144, avg=4820.00, stdev=267.15, samples=4 00:24:46.202 write: IOPS=5763, BW=90.1MiB/s (94.4MB/s)(158MiB/1754msec); 0 zone resets 00:24:46.202 slat (usec): min=39, max=455, 
avg=41.41, stdev= 8.26 00:24:46.202 clat (usec): min=1999, max=16912, avg=9010.42, stdev=1395.76 00:24:46.202 lat (usec): min=2044, max=17061, avg=9051.83, stdev=1398.03 00:24:46.202 clat percentiles (usec): 00:24:46.202 | 1.00th=[ 6456], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 7898], 00:24:46.202 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:24:46.202 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10683], 95.00th=[11338], 00:24:46.202 | 99.00th=[12649], 99.50th=[13960], 99.90th=[15926], 99.95th=[16581], 00:24:46.202 | 99.99th=[16712] 00:24:46.202 bw ( KiB/s): min=74496, max=85696, per=86.99%, avg=80224.00, stdev=4987.14, samples=4 00:24:46.202 iops : min= 4656, max= 5356, avg=5014.00, stdev=311.70, samples=4 00:24:46.202 lat (msec) : 2=0.01%, 4=0.83%, 10=82.31%, 20=16.86% 00:24:46.202 cpu : usr=84.77%, sys=12.39%, ctx=164, majf=0, minf=33 00:24:46.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:46.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:46.202 issued rwts: total=19424,10110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:46.202 00:24:46.202 Run status group 0 (all jobs): 00:24:46.202 READ: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=304MiB (318MB), run=2002-2002msec 00:24:46.202 WRITE: bw=90.1MiB/s (94.4MB/s), 90.1MiB/s-90.1MiB/s (94.4MB/s-94.4MB/s), io=158MiB (166MB), run=1754-1754msec 00:24:46.202 15:37:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:46.202 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:46.202 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:46.202 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:46.202 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:46.202 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.202 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:46.202 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.202 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:46.202 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.202 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.202 rmmod nvme_tcp 00:24:46.202 rmmod nvme_fabrics 00:24:46.202 rmmod nvme_keyring 00:24:46.202 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3883790 ']' 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3883790 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 3883790 ']' 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 
3883790 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3883790 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3883790' 00:24:46.203 killing process with pid 3883790 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 3883790 00:24:46.203 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 3883790 00:24:46.462 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.462 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.462 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.462 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:46.462 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:46.462 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.462 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.462 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.462 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.462 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.462 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.462 15:37:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.371 15:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.371 00:24:48.371 real 0m17.991s 00:24:48.371 user 0m58.460s 00:24:48.371 sys 0m7.937s 00:24:48.371 15:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:48.371 15:37:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.371 ************************************ 00:24:48.371 END TEST nvmf_fio_host 00:24:48.371 ************************************ 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.632 ************************************ 00:24:48.632 START TEST nvmf_failover 00:24:48.632 ************************************ 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:48.632 * Looking for test storage... 00:24:48.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:48.632 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:48.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.894 --rc genhtml_branch_coverage=1 00:24:48.894 --rc genhtml_function_coverage=1 00:24:48.894 --rc genhtml_legend=1 00:24:48.894 --rc geninfo_all_blocks=1 00:24:48.894 --rc geninfo_unexecuted_blocks=1 00:24:48.894 00:24:48.894 ' 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:48.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.894 --rc genhtml_branch_coverage=1 00:24:48.894 --rc genhtml_function_coverage=1 00:24:48.894 --rc genhtml_legend=1 00:24:48.894 --rc geninfo_all_blocks=1 00:24:48.894 --rc geninfo_unexecuted_blocks=1 00:24:48.894 00:24:48.894 ' 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:48.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.894 --rc genhtml_branch_coverage=1 00:24:48.894 --rc genhtml_function_coverage=1 00:24:48.894 --rc genhtml_legend=1 00:24:48.894 --rc geninfo_all_blocks=1 00:24:48.894 --rc geninfo_unexecuted_blocks=1 00:24:48.894 00:24:48.894 ' 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:48.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.894 --rc genhtml_branch_coverage=1 00:24:48.894 --rc genhtml_function_coverage=1 00:24:48.894 --rc genhtml_legend=1 00:24:48.894 --rc geninfo_all_blocks=1 00:24:48.894 --rc geninfo_unexecuted_blocks=1 00:24:48.894 00:24:48.894 ' 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.894 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']'
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512
15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
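host/failover.sh reuses the bring-up pattern the fio host test traced earlier: $rpc_py drives the target running inside the cvl_0_0_ns_spdk namespace. A condensed sketch of that sequence, using the RPC commands that appear verbatim above for the fio host test; the final listener on NVMF_SECOND_PORT is an assumption about the failover path, not a step shown in this part of the log:

    # create the TCP transport, a 64 MiB malloc bdev with 512 B blocks,
    # and a subsystem exposing it on the in-namespace target address
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc1
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # assumed failover step: a second listener on NVMF_SECOND_PORT (4421)
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421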
00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:48.895 15:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:57.038 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:57.038 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:57.038 Found net devices under 0000:31:00.0: cvl_0_0 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:57.038 Found net devices under 0000:31:00.1: cvl_0_1 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.038 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.039 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:57.039 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:57.039 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.039 15:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:57.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:24:57.039 00:24:57.039 --- 10.0.0.2 ping statistics --- 00:24:57.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.039 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:24:57.039 00:24:57.039 --- 10.0.0.1 ping statistics --- 00:24:57.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.039 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3889923 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3889923 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3889923 ']' 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:57.039 15:37:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.039 [2024-11-06 15:37:14.329398] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:24:57.039 [2024-11-06 15:37:14.329463] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.039 [2024-11-06 15:37:14.418627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:57.039 [2024-11-06 15:37:14.470443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:57.039 [2024-11-06 15:37:14.470499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.039 [2024-11-06 15:37:14.470508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.039 [2024-11-06 15:37:14.470515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.039 [2024-11-06 15:37:14.470522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.039 [2024-11-06 15:37:14.472414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.039 [2024-11-06 15:37:14.472571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.039 [2024-11-06 15:37:14.472572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.300 15:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:57.300 15:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:57.300 15:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:57.300 15:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.300 15:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.300 15:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.300 15:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:57.561 [2024-11-06 15:37:15.369698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.561 15:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:57.822 Malloc0 00:24:57.822 15:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.083 15:37:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:58.083 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.344 [2024-11-06 15:37:16.197336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.344 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:58.609 [2024-11-06 15:37:16.393694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:58.609 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:58.609 [2024-11-06 15:37:16.574235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:58.869 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3890356 00:24:58.869 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:58.869 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.870 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3890356 /var/tmp/bdevperf.sock 00:24:58.870 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3890356 ']' 00:24:58.870 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:58.870 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:58.870 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:58.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:58.870 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:58.870 15:37:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.811 15:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:59.811 15:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:59.811 15:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:59.811 NVMe0n1 00:24:59.811 15:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:00.382 00:25:00.382 15:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3890697 00:25:00.382 15:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:00.382 15:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:01.325 15:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.325 [2024-11-06 15:37:19.302419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f406d0 is same with the state(6) to be set 00:25:01.325 [2024-11-06 15:37:19.302469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f406d0 is same with the state(6) to be set 00:25:01.325 [2024-11-06 15:37:19.302475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f406d0 is same with the state(6) to be set 00:25:01.325 
[2024-11-06 15:37:19.302480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f406d0 is same with the state(6) to be set
00:25:01.587 15:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:04.886 15:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:04.886
00:25:04.886 15:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:04.886 [2024-11-06 15:37:22.799702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f41520 is same with the state(6) to be set
00:25:04.886 15:37:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
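Stripped of the trace prefixes, the failover exercise so far is a rotation across the subsystem's three listeners: the controller was attached with -x failover on ports 4420 and 4421, the 4420 listener was removed (the recv-state errors above are the target tearing down that qpair), a third path on 4422 was attached, and 4421 was then removed in turn. As a standalone sketch, using the same RPCs and arguments as this run (rpc.py stands for the full scripts/rpc.py path in the workspace):

  # two paths up front, multipath policy = failover
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # yank the active path; I/O should shift to the 4421 path
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # add a third path, then remove 4421 as well
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421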
00:25:08.187 15:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-11-06 15:37:25.980278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:08.187 15:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:09.127 15:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:09.388 [2024-11-06 15:37:27.173527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208c7a0 is same with the state(6) to be set
00:25:09.388 15:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3890697
00:25:15.976 {
00:25:15.976 "results": [
00:25:15.976 {
00:25:15.976 "job": "NVMe0n1",
00:25:15.976 "core_mask": "0x1",
00:25:15.976 "workload": "verify",
00:25:15.976 "status": "finished",
00:25:15.976 "verify_range": {
00:25:15.976 "start": 0,
00:25:15.976 "length": 16384
00:25:15.976 },
00:25:15.976 "queue_depth": 128,
00:25:15.976 "io_size": 4096,
00:25:15.976 "runtime": 15.005317,
00:25:15.976 "iops": 12475.911038733804,
00:25:15.976 "mibps": 48.73402749505392,
00:25:15.976 "io_failed": 3693,
00:25:15.976 "io_timeout": 0,
00:25:15.976 "avg_latency_us": 10040.253770984154,
00:25:15.976 "min_latency_us": 549.5466666666666,
00:25:15.976 "max_latency_us": 17476.266666666666
00:25:15.976 }
00:25:15.976 ],
00:25:15.976 "core_count": 1
00:25:15.976 }
00:25:15.976 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3890356
00:25:15.976 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3890356 ']'
00:25:15.976 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3890356
00:25:15.976 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:25:15.976 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
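The per-job JSON above is self-consistent, which makes a quick sanity check possible: iops times io_size should reproduce the reported mibps, and the non-zero io_failed is expected here, since each listener removal aborts whatever I/O is in flight on the deleted submission queue (the ABORTED - SQ DELETION dump below). A one-line check, assuming bc is installed:

  # 12475.911 IOPS x 4096 B per I/O, expressed in MiB/s
  echo 'scale=2; 12475.911038733804 * 4096 / 1048576' | bc
  # prints 48.73, matching the reported "mibps": 48.734...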
00:25:15.976 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3890356 00:25:15.976 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:15.976 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:15.976 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3890356' 00:25:15.976 killing process with pid 3890356 00:25:15.976 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3890356 00:25:15.976 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3890356 00:25:15.976 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:15.976 [2024-11-06 15:37:16.649840] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:25:15.976 [2024-11-06 15:37:16.649898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890356 ] 00:25:15.976 [2024-11-06 15:37:16.737421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.976 [2024-11-06 15:37:16.772992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.976 Running I/O for 15 seconds... 00:25:15.976 10594.00 IOPS, 41.38 MiB/s [2024-11-06T14:37:33.959Z] [2024-11-06 15:37:19.303752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.976 [2024-11-06 15:37:19.303786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.303797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.976 [2024-11-06 15:37:19.303805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.303814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.976 [2024-11-06 15:37:19.303821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.303830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.976 [2024-11-06 15:37:19.303837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.303846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c73fc0 is same with the state(6) to be set 00:25:15.976 [2024-11-06 15:37:19.303911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.976 [2024-11-06 15:37:19.303921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:15.976 [2024-11-06 15:37:19.303935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.303944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.303954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.303961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.303970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.303978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.303988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.303995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304110] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304278] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.976 [2024-11-06 15:37:19.304311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.976 [2024-11-06 15:37:19.304318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92048 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 
[2024-11-06 15:37:19.304627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.977 [2024-11-06 15:37:19.304790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.977 [2024-11-06 15:37:19.304797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.977 [2024-11-06 15:37:19.304807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.977 [2024-11-06 15:37:19.304814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further identical print_command/print_completion pairs elided: every remaining outstanding WRITE (lba:92224-92568) and READ (lba:91560-91752) on qid:1 is aborted the same way, ABORTED - SQ DELETION (00/08) ...]
00:25:15.979 [2024-11-06 15:37:19.305984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.979 [2024-11-06 15:37:19.305991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.979 [2024-11-06 15:37:19.306001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.979 [2024-11-06 15:37:19.306008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.979 [2024-11-06 15:37:19.306017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.979 [2024-11-06 15:37:19.306025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.979 [2024-11-06 15:37:19.306034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.979 [2024-11-06 15:37:19.306041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.979 [2024-11-06 15:37:19.306051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.979 [2024-11-06 15:37:19.306058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.979 [2024-11-06 15:37:19.306077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:15.979 [2024-11-06 15:37:19.306084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:15.979 [2024-11-06 15:37:19.306091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91800 len:8 PRP1 0x0 PRP2 0x0
00:25:15.979 [2024-11-06 15:37:19.306098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.979 [2024-11-06 15:37:19.306143] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:15.979 [2024-11-06 15:37:19.306153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:15.979 [2024-11-06 15:37:19.309721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:15.979 [2024-11-06 15:37:19.309748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73fc0 (9): Bad file descriptor
00:25:15.979 [2024-11-06 15:37:19.338022] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
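The burst above is the expected signature of the bdev_nvme failover path this test exercises: when the TCP connection to 10.0.0.2:4420 is dropped, the qpair is deleted, every outstanding and queued command on qid:1 is completed with ABORTED - SQ DELETION (00/08), and the controller is reconnected on the second listener at 10.0.0.2:4421. The same pattern repeats below once I/O resumes. A minimal sketch for condensing such a log offline, assuming only the message layout visible in this console output (the script and the path nvmf_failover.log are hypothetical, not part of the SPDK tree):

# Hedged sketch: summarize the repeated qpair-abort notices into
# per-opcode counts and LBA ranges, plus completion-status counts.
# Relies only on the message shapes seen in this log.
import re
from collections import Counter

# "... nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92216 ..."
CMD = re.compile(r'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) '
                 r'sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+)')
# "... spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) ..."
CPL = re.compile(r'spdk_nvme_print_completion: \*NOTICE\*: (.+?) \((\d+)/(\d+)\)')

def summarize(path="nvmf_failover.log"):  # hypothetical saved copy of this console log
    text = open(path, encoding="utf-8", errors="replace").read()
    cmds, lbas = Counter(), {}
    for opc, lba in CMD.findall(text):
        cmds[opc] += 1
        lbas.setdefault(opc, []).append(int(lba))
    cpls = Counter(f"{msg} ({sct}/{sc})" for msg, sct, sc in CPL.findall(text))
    for opc, n in sorted(cmds.items()):
        print(f"{opc:5s} {n:4d} commands, lba {min(lbas[opc])}..{max(lbas[opc])}")
    for status, n in cpls.most_common():
        print(f"{n:4d} completions: {status}")

if __name__ == "__main__":
    summarize()

Run against a saved copy of this output, the sketch prints one line per opcode with its command count and LBA range, and one line per distinct completion status, in place of the hundreds of per-command notices recorded above and below.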
00:25:15.979 10728.50 IOPS, 41.91 MiB/s [2024-11-06T14:37:33.962Z] 10959.00 IOPS, 42.81 MiB/s [2024-11-06T14:37:33.962Z] 11484.00 IOPS, 44.86 MiB/s [2024-11-06T14:37:33.962Z]
[2024-11-06 15:37:22.800993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.979 [2024-11-06 15:37:22.801024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further identical print_command/print_completion pairs elided: every remaining outstanding READ (lba:42480-42712) and WRITE (lba:42864-43376) on qid:1 is aborted the same way, ABORTED - SQ DELETION (00/08) ...]
00:25:15.982 [2024-11-06 15:37:22.802164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:15.982 [2024-11-06 15:37:22.802169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43384 len:8 PRP1 0x0 PRP2 0x0
00:25:15.982 [2024-11-06 15:37:22.802175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same aborting-queued-i/o / manual-completion sequence repeats for the queued WRITEs lba:43392-43488 and a queued READ lba:42720, each completed with ABORTED - SQ DELETION (00/08) ...]
00:25:15.982 [2024-11-06 15:37:22.802445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting
queued i/o 00:25:15.982 [2024-11-06 15:37:22.802449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.982 [2024-11-06 15:37:22.802453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42728 len:8 PRP1 0x0 PRP2 0x0 00:25:15.982 [2024-11-06 15:37:22.802458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.982 [2024-11-06 15:37:22.802463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.982 [2024-11-06 15:37:22.802467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.982 [2024-11-06 15:37:22.802471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42736 len:8 PRP1 0x0 PRP2 0x0 00:25:15.982 [2024-11-06 15:37:22.802477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.982 [2024-11-06 15:37:22.802482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.982 [2024-11-06 15:37:22.802486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.982 [2024-11-06 15:37:22.802490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42744 len:8 PRP1 0x0 PRP2 0x0 00:25:15.982 [2024-11-06 15:37:22.802495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.802501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.802505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.802509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42752 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.802514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.802519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.802523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.802527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42760 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.802533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.802538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.802542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.802546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42768 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.802551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.802557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.802561] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.802565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42776 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.802570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.802575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.802579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.802583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42784 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.802588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.802594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.802598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.802602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42792 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.802607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.802612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.802616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.802620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42800 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.802625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.802630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.802634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.802638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42808 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.802643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.802648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.802652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.802656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42816 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.802661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.802667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.802670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.813001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42824 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.813028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.813042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.813048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.813054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42832 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.813061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.813069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.813074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.813080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42840 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.813086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.813093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.813099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.813104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42848 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.813111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.813118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.983 [2024-11-06 15:37:22.813123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.983 [2024-11-06 15:37:22.813129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42856 len:8 PRP1 0x0 PRP2 0x0 00:25:15.983 [2024-11-06 15:37:22.813135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.813178] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:15.983 [2024-11-06 15:37:22.813206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.983 [2024-11-06 15:37:22.813214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.813224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.983 [2024-11-06 15:37:22.813231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.813238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.983 [2024-11-06 15:37:22.813245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.813252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.983 [2024-11-06 15:37:22.813259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:22.813266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:15.983 [2024-11-06 15:37:22.813300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73fc0 (9): Bad file descriptor 00:25:15.983 [2024-11-06 15:37:22.816624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:15.983 [2024-11-06 15:37:22.838563] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:25:15.983 11670.00 IOPS, 45.59 MiB/s [2024-11-06T14:37:33.966Z] 11909.67 IOPS, 46.52 MiB/s [2024-11-06T14:37:33.966Z] 12073.57 IOPS, 47.16 MiB/s [2024-11-06T14:37:33.966Z] 12191.50 IOPS, 47.62 MiB/s [2024-11-06T14:37:33.966Z] [2024-11-06 15:37:27.175773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.983 [2024-11-06 15:37:27.175802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:27.175815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.983 [2024-11-06 15:37:27.175820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:27.175827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.983 [2024-11-06 15:37:27.175833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:27.175840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.983 [2024-11-06 15:37:27.175845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:27.175852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.983 [2024-11-06 15:37:27.175857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.983 [2024-11-06 15:37:27.175863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.983 [2024-11-06 15:37:27.175869] nvme_qpair.c: 474:spdk_nvme_print_completion: 
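The entries above form one complete bdev_nvme failover cycle: queued I/O on the torn-down submission queue is completed manually with ABORTED - SQ DELETION, the controller path fails over to the next listed trid, the TCP qpair is flushed and disconnected, and the controller reset completes before the per-second throughput samples resume. A minimal sketch for pulling just these transition events and the IOPS samples out of the full console log, assuming it has been saved locally as build.log (a hypothetical filename, not produced by this pipeline):

  grep -E 'bdev_nvme_failover_trid|nvme_ctrlr_fail|nvme_ctrlr_disconnect|bdev_nvme_reset_ctrlr_complete|IOPS' build.log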
[2024-11-06 15:37:27.175773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... repeated nvme_qpair.c NOTICE/ERROR entries elided (15:37:27.175773 through 15:37:27.177376): queued WRITE commands (lba 109192-109968) and READ commands (lba 108952-109184) on sqid:1, each completed ABORTED - SQ DELETION (00/08) qid:1, interleaved with nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o and nvme_qpair_manual_complete_request notices ...]
00:25:15.987 [2024-11-06 15:37:27.177409] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:15.987 [2024-11-06 15:37:27.177427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.987 [2024-11-06 15:37:27.177432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.987 [2024-11-06 15:37:27.177440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.987 [2024-11-06 15:37:27.177445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.987 [2024-11-06 15:37:27.177451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST
(0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.987 [2024-11-06 15:37:27.177458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.987 [2024-11-06 15:37:27.188102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.987 [2024-11-06 15:37:27.188126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.987 [2024-11-06 15:37:27.188134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:15.987 [2024-11-06 15:37:27.188164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c73fc0 (9): Bad file descriptor 00:25:15.987 [2024-11-06 15:37:27.190634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:15.987 [2024-11-06 15:37:27.213596] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:15.987 12212.89 IOPS, 47.71 MiB/s [2024-11-06T14:37:33.970Z] 12295.30 IOPS, 48.03 MiB/s [2024-11-06T14:37:33.970Z] 12361.00 IOPS, 48.29 MiB/s [2024-11-06T14:37:33.970Z] 12377.08 IOPS, 48.35 MiB/s [2024-11-06T14:37:33.970Z] 12420.85 IOPS, 48.52 MiB/s [2024-11-06T14:37:33.970Z] 12440.79 IOPS, 48.60 MiB/s 00:25:15.987 Latency(us) 00:25:15.987 [2024-11-06T14:37:33.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.987 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:15.987 Verification LBA range: start 0x0 length 0x4000 00:25:15.987 NVMe0n1 : 15.01 12475.91 48.73 246.11 0.00 10040.25 549.55 17476.27 00:25:15.987 [2024-11-06T14:37:33.970Z] =================================================================================================================== 00:25:15.987 [2024-11-06T14:37:33.970Z] Total : 12475.91 48.73 246.11 0.00 10040.25 549.55 17476.27 00:25:15.987 Received shutdown signal, test time was about 15.000000 seconds 00:25:15.987 00:25:15.987 Latency(us) 00:25:15.987 [2024-11-06T14:37:33.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.987 [2024-11-06T14:37:33.970Z] =================================================================================================================== 00:25:15.987 [2024-11-06T14:37:33.970Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.987 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:15.987 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:15.987 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:15.987 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3893572 00:25:15.987 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3893572 /var/tmp/bdevperf.sock 00:25:15.987 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:15.987 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3893572 ']' 00:25:15.987 15:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local 
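[Editor's note] The trace above shows the harness's standard handoff between runs: assert on the captured log from the previous bdevperf run, then relaunch bdevperf as an idle RPC server and wait for its socket. A minimal bash sketch of that pattern (the log path is an assumption, and waitforlisten is simplified here to a socket poll rather than the harness's real helper):

    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || exit 1     # three forced failovers should yield three successful resets

    # -z starts bdevperf idle; it runs nothing until perform_tests arrives over the RPC socket
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    while [ ! -S /var/tmp/bdevperf.sock ]; do   # crude stand-in for waitforlisten
        kill -0 "$bdevperf_pid" 2>/dev/null || { echo "bdevperf exited early" >&2; exit 1; }
        sleep 0.1
    done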
00:25:16.558 15:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:16.558 [2024-11-06 15:37:34.476634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:16.558 15:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:16.819 [2024-11-06 15:37:34.661084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:16.819 15:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:17.079 NVMe0n1
00:25:17.079 15:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:17.340
00:25:17.340 15:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:17.600
00:25:17.600 15:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:17.600 15:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:25:17.860 15:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:18.120 15:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:25:21.454 15:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:21.454 15:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:25:21.454 15:37:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3894723
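[Editor's note] The sequence above is the multipath wiring for the failover test: the target exposes the same subsystem on two extra ports, the initiator side (bdevperf's RPC socket) attaches the same controller name over all three transport IDs with -x failover, and detaching the active path then forces bdev_nvme to fail over. A condensed sketch of the same calls (arguments exactly as in the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side: two additional listeners for the existing subsystem
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # initiator side: same bdev name NVMe0 on three paths, failover multipath policy
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # drop the active path; I/O should resume on the next registered path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1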
15:37:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
15:37:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3894723
00:25:22.444 {
00:25:22.444   "results": [
00:25:22.444     {
00:25:22.444       "job": "NVMe0n1",
00:25:22.444       "core_mask": "0x1",
00:25:22.444       "workload": "verify",
00:25:22.444       "status": "finished",
00:25:22.444       "verify_range": {
00:25:22.444         "start": 0,
00:25:22.444         "length": 16384
00:25:22.444       },
00:25:22.444       "queue_depth": 128,
00:25:22.444       "io_size": 4096,
00:25:22.444       "runtime": 1.003691,
00:25:22.444       "iops": 13020.939711524763,
00:25:22.444       "mibps": 50.863045748143605,
00:25:22.444       "io_failed": 0,
00:25:22.444       "io_timeout": 0,
00:25:22.444       "avg_latency_us": 9795.21908230673,
00:25:22.444       "min_latency_us": 907.9466666666667,
00:25:22.444       "max_latency_us": 9666.56
00:25:22.444     }
00:25:22.444   ],
00:25:22.444   "core_count": 1
00:25:22.444 }
00:25:22.444 15:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:22.444 [2024-11-06 15:37:33.527029] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:25:22.444 [2024-11-06 15:37:33.527088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893572 ]
00:25:22.444 [2024-11-06 15:37:33.610645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:22.444 [2024-11-06 15:37:33.639572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:22.444 [2024-11-06 15:37:35.849264] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:22.444 [2024-11-06 15:37:35.849303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:22.444 [2024-11-06 15:37:35.849312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3; duplicate records elided ...]
00:25:22.444 [2024-11-06 15:37:35.849353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:25:22.444 [2024-11-06 15:37:35.849373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:25:22.444 [2024-11-06 15:37:35.849384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc2fc0 (9): Bad file descriptor
00:25:22.444 [2024-11-06 15:37:35.859346] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:25:22.444 Running I/O for 1 seconds...
00:25:22.444 12941.00 IOPS, 50.55 MiB/s
00:25:22.444 Latency(us)
00:25:22.444 [2024-11-06T14:37:40.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:22.444 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:22.444 Verification LBA range: start 0x0 length 0x4000
00:25:22.444 NVMe0n1 : 1.00 13020.94 50.86 0.00 0.00 9795.22 907.95 9666.56
00:25:22.444 [2024-11-06T14:37:40.427Z] ===================================================================================================================
00:25:22.444 [2024-11-06T14:37:40.427Z] Total : 13020.94 50.86 0.00 0.00 9795.22 907.95 9666.56
00:25:22.444 15:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:22.444 15:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:22.705 15:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:22.705 15:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:22.705 15:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:22.966 15:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:22.966 15:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:26.267 15:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:26.267 15:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:25:26.267 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3893572
00:25:26.267 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3893572 ']'
00:25:26.267 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3893572
00:25:26.267 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:25:26.267 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:26.267 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3893572
00:25:26.267 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:25:26.267 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:25:26.267 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3893572'
00:25:26.267 killing process with pid 3893572
00:25:26.267 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3893572
00:25:26.267 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3893572
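[Editor's note] The JSON object printed after the wait above is bdevperf's perform_tests result for the one-second verify run. A small sketch of pulling the headline numbers out of it (this assumes the output was captured to a file named results.json and that jq is installed; neither is part of the harness):

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, failed=\(.io_failed)"' results.json
    # prints: NVMe0n1: 13020.939711524763 IOPS, 50.863045748143605 MiB/s, failed=0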
00:25:26.528 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:25:26.528 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:26.528 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:25:26.528 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:26.528 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:25:26.528 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:26.528 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:25:26.528 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:26.528 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:25:26.528 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:26.528 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:26.528 rmmod nvme_tcp
00:25:26.528 rmmod nvme_fabrics
00:25:26.789 rmmod nvme_keyring
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3889923 ']'
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3889923
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3889923 ']'
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3889923
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3889923
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3889923'
00:25:26.789 killing process with pid 3889923
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3889923
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3889923
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
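[Editor's note] The @952-@976 traces above are the harness's killprocess guard: validate the PID argument, check the process is still alive, inspect its command name before signalling, then kill and reap it. A simplified bash sketch of that shape (the real helper in common/autotest_common.sh handles the sudo case specially rather than refusing it as this sketch does):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                     # no PID given
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            return 1                                  # simplified: don't signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true               # reap; works because the PID is our child
    }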
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:26.789 15:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:29.332 15:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:29.332 real 0m40.386s
00:25:29.332 user 2m3.887s
00:25:29.332 sys 0m8.740s
00:25:29.332 15:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable
00:25:29.332 15:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:29.332 ************************************
00:25:29.332 END TEST nvmf_failover
00:25:29.332 ************************************
00:25:29.332 15:37:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:29.332 15:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:25:29.332 15:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:25:29.332 15:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.332 ************************************
00:25:29.332 START TEST nvmf_host_discovery
00:25:29.332 ************************************
00:25:29.332 15:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:29.332 * Looking for test storage...
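[Editor's note] The banners and the real/user/sys block above come from the harness's run_test wrapper. A speculative bash sketch of its visible behavior, inferred only from this output (the actual helper in common/autotest_common.sh also manages xtrace state and return codes):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # produces the real/user/sys timing printed after each test
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvmf_host_discovery ./test/nvmf/host/discovery.sh --transport=tcp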
00:25:29.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:29.332 15:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:25:29.332 15:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version
00:25:29.332 15:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:25:29.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:29.333 --rc genhtml_branch_coverage=1
00:25:29.333 --rc genhtml_function_coverage=1
00:25:29.333 --rc genhtml_legend=1
00:25:29.333 --rc geninfo_all_blocks=1
00:25:29.333 --rc geninfo_unexecuted_blocks=1
00:25:29.333
00:25:29.333 '
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' [... same option block as above elided ...] '
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov [... same option block as above elided ...] '
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov [... same option block as above elided ...] '
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
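[Editor's note] The cmp_versions walk above splits each version string on '.', '-' and ':' and compares component-wise, sanitizing each piece through a decimal helper. A compact bash sketch of the same idea (numeric components only; the real scripts/common.sh does more), plus the empty-string arithmetic-test pitfall that surfaces a little further down in this log as "[: : integer expression expected":

    # version_lt A B: succeeds when A < B, comparing dot/dash/colon-separated components
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} && v < ${#ver2[@]}; v++)); do
            (( ver1[v] > ver2[v] )) && return 1
            (( ver1[v] < ver2[v] )) && return 0
        done
        (( ${#ver1[@]} < ${#ver2[@]} ))   # a shorter matching prefix counts as smaller
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the trace's result

    # Pitfall: test(1)'s -eq demands integers, so an empty variable aborts it:
    x=''
    [ "$x" -eq 1 ]          # -> "[: : integer expression expected"
    [ "${x:-0}" -eq 1 ]     # defaulting the value avoids the error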
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go trio repeated seven more times; duplicate segments elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same elided tail as above, with one more golangci/protoc/go trio prepended ...]
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same elided tail as above, with one more golangci/protoc/go trio prepended ...]
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same elided PATH value as above ...]
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:29.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery --
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:29.333 15:37:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:37.484 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:37.484 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.484 15:37:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.484 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:37.485 Found net devices under 0000:31:00.0: cvl_0_0 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:37.485 Found net devices under 0000:31:00.1: cvl_0_1 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.485 
15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:25:37.485 00:25:37.485 --- 10.0.0.2 ping statistics --- 00:25:37.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.485 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:25:37.485 00:25:37.485 --- 10.0.0.1 ping statistics --- 00:25:37.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.485 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3899949 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3899949 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3899949 ']' 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:37.485 15:37:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.485 [2024-11-06 15:37:54.835792] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
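[Editor's note] The nvmftestinit trace above carves the two E810 ports into a point-to-point test rig: the target-side port moves into a private network namespace with 10.0.0.2, the initiator port stays in the root namespace with 10.0.0.1, and both directions are ping-verified before the target starts. The same steps, collected into a bash sketch (interface names and addresses exactly as in the log):

    ip netns add cvl_0_0_ns_spdk                          # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns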
00:25:37.485 [2024-11-06 15:37:54.835857] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.485 [2024-11-06 15:37:54.935997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.485 [2024-11-06 15:37:54.986136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.485 [2024-11-06 15:37:54.986184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.485 [2024-11-06 15:37:54.986193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.485 [2024-11-06 15:37:54.986200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.485 [2024-11-06 15:37:54.986206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:37.485 [2024-11-06 15:37:54.987010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.746 [2024-11-06 15:37:55.708392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.746 [2024-11-06 15:37:55.720660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.746 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.006 null0 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.006 null1 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3900126 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3900126 /tmp/host.sock 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3900126 ']' 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:38.006 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:38.006 15:37:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.006 [2024-11-06 15:37:55.817386] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
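[annotation] The rpc_cmd calls traced above configure the freshly started target, then launch a second nvmf_tgt that plays the NVMe-oF host role. Roughly the same sequence with the stock scripts/rpc.py client that rpc_cmd wraps; flags are copied verbatim from the trace, only the wrapper differs:

  # target side: TCP transport plus a discovery listener on port 8009
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  # two null bdevs (size in MB, 512-byte blocks) to attach as namespaces later
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine

The host instance (pid 3900126 here) is started with -r /tmp/host.sock, so every host-side RPC below carries -s /tmp/host.sock and never touches the target's default RPC socket.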
00:25:38.006 [2024-11-06 15:37:55.817446] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900126 ] 00:25:38.006 [2024-11-06 15:37:55.909946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.006 [2024-11-06 15:37:55.962250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.948 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.949 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.210 15:37:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.210 [2024-11-06 15:37:56.999991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:39.210 15:37:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:39.210 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.211 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:39.211 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:39.211 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.211 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.211 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.211 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.211 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.211 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.211 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.471 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:25:39.471 15:37:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:39.731 [2024-11-06 15:37:57.706827] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:39.731 [2024-11-06 15:37:57.706857] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:39.731 [2024-11-06 15:37:57.706873] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:39.991 
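[annotation] On the host instance the test enables bdev_nvme debug logging and starts the discovery service against the 8009 listener; commands as traced, rewritten for the stock client:

  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

The get_subsystem_names and get_bdev_list helpers polled throughout are thin wrappers, approximately:

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs            | jq -r '.[].name' | sort | xargs

Both stay empty while cnode0 only has a namespace and a 4420 listener, because the discovery log page reports nothing to a host the subsystem does not admit; once nvmf_subsystem_add_host allows nqn.2021-12.io.spdk:test, the discovery poller attaches controller nvme0 and bdev nvme0n1 appears, which is the attach/connect sequence beginning just above.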
[2024-11-06 15:37:57.795166] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:39.991 [2024-11-06 15:37:57.896076] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:39.991 [2024-11-06 15:37:57.897381] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1dfa8c0:1 started. 00:25:39.991 [2024-11-06 15:37:57.899286] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:39.991 [2024-11-06 15:37:57.899316] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:39.991 [2024-11-06 15:37:57.906698] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1dfa8c0 was disconnected and freed. delete nvme_qpair. 00:25:40.251 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.251 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:40.251 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:40.251 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.251 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.251 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.251 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.251 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.251 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.512 15:37:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.512 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.772 [2024-11-06 15:37:58.685613] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1dfaaa0:1 started. 00:25:40.772 [2024-11-06 15:37:58.688720] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1dfaaa0 was disconnected and freed. delete nvme_qpair. 
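[annotation] With nvme0/nvme0n1 attached and one add notification consumed, the test hot-adds the second namespace on the target and waits for it to surface on the host:

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
  # host side: poll until both namespaces show up as bdevs
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  # expected: nvme0n1 nvme0n2

The qpair connect/free INFO lines just above accompany the examine of the new namespace; the net effect is a second bdev, nvme0n2, and a second bdev-add notification (notify_id moves from 1 to 2 below).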
00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.772 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.032 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:41.032 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:41.032 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:41.032 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:41.032 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:41.032 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.032 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.033 [2024-11-06 15:37:58.776511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:41.033 [2024-11-06 15:37:58.777207] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:41.033 [2024-11-06 15:37:58.777229] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.033 [2024-11-06 15:37:58.864957] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:41.033 15:37:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:41.033 [2024-11-06 15:37:58.965867] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:41.033 [2024-11-06 15:37:58.965904] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:41.033 [2024-11-06 15:37:58.965912] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:41.033 [2024-11-06 15:37:58.965917] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:41.973 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:41.973 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:41.973 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:41.973 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:41.973 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:41.973 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:41.973 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:41.973 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.973 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:41.973 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.233 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.234 15:37:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.234 [2024-11-06 15:38:00.039936] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:42.234 [2024-11-06 15:38:00.039956] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:42.234 [2024-11-06 15:38:00.040158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.234 [2024-11-06 15:38:00.040172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.234 [2024-11-06 15:38:00.040179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.234 [2024-11-06 15:38:00.040185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.234 [2024-11-06 15:38:00.040191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.234 [2024-11-06 15:38:00.040197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.234 [2024-11-06 15:38:00.040202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.234 [2024-11-06 15:38:00.040207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.234 [2024-11-06 15:38:00.040213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcafd0 is same with the state(6) to be set 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 
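[annotation] This is the multipath part of the test: a second data listener (4421) was added above, the host's path list for nvme0 grew to both ports, and now the original 4420 listener is pulled out from under the live controller:

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # -> "4420 4421" once the second path attaches
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Removing the listener makes the target delete the 4420 admin queue; the ABORTED - SQ DELETION completions above are the outstanding ASYNC EVENT REQUESTs on that queue being aborted, after which the host sees the socket go bad on flush ("(9): Bad file descriptor").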
max=10 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:42.234 [2024-11-06 15:38:00.050172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcafd0 (9): Bad file descriptor 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.234 [2024-11-06 15:38:00.060206] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.234 [2024-11-06 15:38:00.060217] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.234 [2024-11-06 15:38:00.060221] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.234 [2024-11-06 15:38:00.060225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.234 [2024-11-06 15:38:00.060239] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.234 [2024-11-06 15:38:00.060534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.234 [2024-11-06 15:38:00.060545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcafd0 with addr=10.0.0.2, port=4420 00:25:42.234 [2024-11-06 15:38:00.060551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcafd0 is same with the state(6) to be set 00:25:42.234 [2024-11-06 15:38:00.060561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcafd0 (9): Bad file descriptor 00:25:42.234 [2024-11-06 15:38:00.060569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.234 [2024-11-06 15:38:00.060574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.234 [2024-11-06 15:38:00.060581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.234 [2024-11-06 15:38:00.060586] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.234 [2024-11-06 15:38:00.060590] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.234 [2024-11-06 15:38:00.060593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.234 [2024-11-06 15:38:00.070267] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.234 [2024-11-06 15:38:00.070278] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.234 [2024-11-06 15:38:00.070282] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.234 [2024-11-06 15:38:00.070285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.234 [2024-11-06 15:38:00.070297] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.234 [2024-11-06 15:38:00.070490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.234 [2024-11-06 15:38:00.070499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcafd0 with addr=10.0.0.2, port=4420 00:25:42.234 [2024-11-06 15:38:00.070505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcafd0 is same with the state(6) to be set 00:25:42.234 [2024-11-06 15:38:00.070516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcafd0 (9): Bad file descriptor 00:25:42.234 [2024-11-06 15:38:00.070525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.234 [2024-11-06 15:38:00.070530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.234 [2024-11-06 15:38:00.070536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.234 [2024-11-06 15:38:00.070541] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.234 [2024-11-06 15:38:00.070544] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.234 [2024-11-06 15:38:00.070547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.234 [2024-11-06 15:38:00.080325] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.234 [2024-11-06 15:38:00.080334] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.234 [2024-11-06 15:38:00.080337] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.234 [2024-11-06 15:38:00.080341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.234 [2024-11-06 15:38:00.080351] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:42.234 [2024-11-06 15:38:00.080661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.234 [2024-11-06 15:38:00.080669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcafd0 with addr=10.0.0.2, port=4420 00:25:42.234 [2024-11-06 15:38:00.080674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcafd0 is same with the state(6) to be set 00:25:42.234 [2024-11-06 15:38:00.080682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcafd0 (9): Bad file descriptor 00:25:42.234 [2024-11-06 15:38:00.080690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.234 [2024-11-06 15:38:00.080694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.234 [2024-11-06 15:38:00.080700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.234 [2024-11-06 15:38:00.080704] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.234 [2024-11-06 15:38:00.080707] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.234 [2024-11-06 15:38:00.080710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.234 [2024-11-06 15:38:00.090380] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.234 [2024-11-06 15:38:00.090391] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.234 [2024-11-06 15:38:00.090394] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.234 [2024-11-06 15:38:00.090397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.234 [2024-11-06 15:38:00.090408] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.234 [2024-11-06 15:38:00.090692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.234 [2024-11-06 15:38:00.090701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcafd0 with addr=10.0.0.2, port=4420 00:25:42.234 [2024-11-06 15:38:00.090710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcafd0 is same with the state(6) to be set 00:25:42.234 [2024-11-06 15:38:00.090718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcafd0 (9): Bad file descriptor 00:25:42.234 [2024-11-06 15:38:00.090725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.234 [2024-11-06 15:38:00.090730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.234 [2024-11-06 15:38:00.090735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.234 [2024-11-06 15:38:00.090739] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:42.234 [2024-11-06 15:38:00.090743] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:42.234 [2024-11-06 15:38:00.090751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:42.234 [2024-11-06 15:38:00.100437] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:25:42.234 [2024-11-06 15:38:00.100446] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:25:42.234 [2024-11-06 15:38:00.100450] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:42.234 [2024-11-06 15:38:00.100454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:42.234 [2024-11-06 15:38:00.100466] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:42.234 [2024-11-06 15:38:00.100657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.234 [2024-11-06 15:38:00.100665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcafd0 with addr=10.0.0.2, port=4420
00:25:42.234 [2024-11-06 15:38:00.100670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcafd0 is same with the state(6) to be set
00:25:42.234 [2024-11-06 15:38:00.100678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcafd0 (9): Bad file descriptor
00:25:42.234 [2024-11-06 15:38:00.100685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:42.234 [2024-11-06 15:38:00.100690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:42.234 [2024-11-06 15:38:00.100695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:42.234 [2024-11-06 15:38:00.100700] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:42.234 [2024-11-06 15:38:00.100703] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:42.234 [2024-11-06 15:38:00.100706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:42.234 [2024-11-06 15:38:00.110494] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:25:42.234 [2024-11-06 15:38:00.110503] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:25:42.234 [2024-11-06 15:38:00.110507] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:42.234 [2024-11-06 15:38:00.110510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:42.234 [2024-11-06 15:38:00.110520] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:42.234 [2024-11-06 15:38:00.110957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.234 [2024-11-06 15:38:00.110988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcafd0 with addr=10.0.0.2, port=4420
00:25:42.234 [2024-11-06 15:38:00.110996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcafd0 is same with the state(6) to be set
00:25:42.234 [2024-11-06 15:38:00.111011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcafd0 (9): Bad file descriptor
00:25:42.234 [2024-11-06 15:38:00.111028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:42.234 [2024-11-06 15:38:00.111034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:42.234 [2024-11-06 15:38:00.111039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:42.234 [2024-11-06 15:38:00.111044] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:42.234 [2024-11-06 15:38:00.111048] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:42.234 [2024-11-06 15:38:00.111052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:42.234 [2024-11-06 15:38:00.120551] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:25:42.234 [2024-11-06 15:38:00.120563] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:25:42.234 [2024-11-06 15:38:00.120566] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:42.234 [2024-11-06 15:38:00.120570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:42.234 [2024-11-06 15:38:00.120581] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:42.234 [2024-11-06 15:38:00.120965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.234 [2024-11-06 15:38:00.120994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcafd0 with addr=10.0.0.2, port=4420
00:25:42.234 [2024-11-06 15:38:00.121003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcafd0 is same with the state(6) to be set
00:25:42.234 [2024-11-06 15:38:00.121018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcafd0 (9): Bad file descriptor
00:25:42.234 [2024-11-06 15:38:00.121043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:42.234 [2024-11-06 15:38:00.121049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:42.234 [2024-11-06 15:38:00.121055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:42.234 [2024-11-06 15:38:00.121060] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:42.234 [2024-11-06 15:38:00.121064] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:42.234 [2024-11-06 15:38:00.121067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:42.234 [2024-11-06 15:38:00.128055] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:25:42.234 [2024-11-06 15:38:00.128071] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:25:42.234 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]]
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.235 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count ))
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]]
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]]
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count ))
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.494 15:38:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:43.875 [2024-11-06 15:38:01.476923] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:43.875 [2024-11-06 15:38:01.476936] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:43.875 [2024-11-06 15:38:01.476945] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:43.875 [2024-11-06 15:38:01.565198] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:25:44.135 [2024-11-06 15:38:01.874627] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421
00:25:44.135 [2024-11-06 15:38:01.875348] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1de21d0:1 started.
00:25:44.135 [2024-11-06 15:38:01.876714] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:44.136 [2024-11-06 15:38:01.876737] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.136 [2024-11-06 15:38:01.885946] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1de21d0 was disconnected and freed. delete nvme_qpair.
00:25:44.136 request:
00:25:44.136 {
00:25:44.136 "name": "nvme",
00:25:44.136 "trtype": "tcp",
00:25:44.136 "traddr": "10.0.0.2",
00:25:44.136 "adrfam": "ipv4",
00:25:44.136 "trsvcid": "8009",
00:25:44.136 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:44.136 "wait_for_attach": true,
00:25:44.136 "method": "bdev_nvme_start_discovery",
00:25:44.136 "req_id": 1
00:25:44.136 }
00:25:44.136 Got JSON-RPC error response
00:25:44.136 response:
00:25:44.136 {
00:25:44.136 "code": -17,
00:25:44.136 "message": "File exists"
00:25:44.136 }
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:25:44.136 15:38:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.136 request:
00:25:44.136 {
00:25:44.136 "name": "nvme_second",
00:25:44.136 "trtype": "tcp",
00:25:44.136 "traddr": "10.0.0.2",
00:25:44.136 "adrfam": "ipv4",
00:25:44.136 "trsvcid": "8009",
00:25:44.136 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:44.136 "wait_for_attach": true,
00:25:44.136 "method": "bdev_nvme_start_discovery",
00:25:44.136 "req_id": 1
00:25:44.136 }
00:25:44.136 Got JSON-RPC error response
00:25:44.136 response:
00:25:44.136 {
00:25:44.136 "code": -17,
00:25:44.136 "message": "File exists"
00:25:44.136 }
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:44.136 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:44.396 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:44.396 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:44.396 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0
00:25:44.396 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:44.396 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:25:44.396 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:44.396 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:25:44.396 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:44.396 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:44.396 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:44.396 15:38:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:45.337 [2024-11-06 15:38:03.136481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:45.337 [2024-11-06 15:38:03.136504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de4900 with addr=10.0.0.2, port=8010
00:25:45.337 [2024-11-06 15:38:03.136514] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:45.337 [2024-11-06 15:38:03.136519] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:45.337 [2024-11-06 15:38:03.136525] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:46.279 [2024-11-06 15:38:04.138809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:46.279 [2024-11-06 15:38:04.138828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de4900 with addr=10.0.0.2, port=8010
00:25:46.279 [2024-11-06 15:38:04.138836] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:46.279 [2024-11-06 15:38:04.138841] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:46.279 [2024-11-06 15:38:04.138846] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:47.220 [2024-11-06 15:38:05.140827] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:25:47.220 request:
00:25:47.220 {
00:25:47.220 "name": "nvme_second",
00:25:47.220 "trtype": "tcp",
00:25:47.220 "traddr": "10.0.0.2",
00:25:47.220 "adrfam": "ipv4",
00:25:47.220 "trsvcid": "8010",
00:25:47.220 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:47.220 "wait_for_attach": false,
00:25:47.220 "attach_timeout_ms": 3000,
00:25:47.220 "method": "bdev_nvme_start_discovery",
00:25:47.220 "req_id": 1
00:25:47.220 }
00:25:47.220 Got JSON-RPC error response
00:25:47.220 response:
00:25:47.220 {
00:25:47.220 "code": -110,
00:25:47.220 "message": "Connection timed out"
00:25:47.220 }
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3900126
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:47.220 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:47.481 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3899949 ']'
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3899949
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 3899949 ']'
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 3899949
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3899949
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3899949'
killing process with pid 3899949
15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 3899949
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 3899949
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:47.481 15:38:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:50.024 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:50.024
00:25:50.024 real 0m20.627s
00:25:50.024 user 0m23.903s
00:25:50.024 sys 0m7.358s
00:25:50.024 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable
00:25:50.024 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:50.024 ************************************
00:25:50.024 END TEST nvmf_host_discovery
00:25:50.024 ************************************
00:25:50.024 15:38:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:50.024 15:38:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:25:50.024 15:38:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:25:50.024 15:38:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:50.025 ************************************
00:25:50.025 START TEST nvmf_host_multipath_status
00:25:50.025 ************************************
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:50.025 * Looking for test storage...
00:25:50.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-:
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-:
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<'
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:25:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:50.025 --rc genhtml_branch_coverage=1
00:25:50.025 --rc genhtml_function_coverage=1
00:25:50.025 --rc genhtml_legend=1
00:25:50.025 --rc geninfo_all_blocks=1
00:25:50.025 --rc geninfo_unexecuted_blocks=1
00:25:50.025
00:25:50.025 '
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:25:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:50.025 --rc genhtml_branch_coverage=1
00:25:50.025 --rc genhtml_function_coverage=1
00:25:50.025 --rc genhtml_legend=1
00:25:50.025 --rc geninfo_all_blocks=1
00:25:50.025 --rc geninfo_unexecuted_blocks=1
00:25:50.025
00:25:50.025 '
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:25:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:50.025 --rc genhtml_branch_coverage=1
00:25:50.025 --rc genhtml_function_coverage=1
00:25:50.025 --rc genhtml_legend=1
00:25:50.025 --rc geninfo_all_blocks=1
00:25:50.025 --rc geninfo_unexecuted_blocks=1
00:25:50.025
00:25:50.025 '
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:25:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:50.025 --rc genhtml_branch_coverage=1
00:25:50.025 --rc genhtml_function_coverage=1
00:25:50.025 --rc genhtml_legend=1
00:25:50.025 --rc geninfo_all_blocks=1
00:25:50.025 --rc geninfo_unexecuted_blocks=1
00:25:50.025
00:25:50.025 '
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.025 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:50.026 15:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.165 15:38:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:58.165 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
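[editor's note] The one stderr line in this stretch, "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected", is a bash artifact rather than a test failure: the traced test '[' '' -eq 1 ']' hands an empty string to an integer comparison, which [ rejects with exit status 2, and the script simply falls through to the next branch. A minimal reproduction, with an illustrative variable name (the real script is testing a configuration variable that happens to be unset in this run):

    VAR=""
    [ "$VAR" -eq 1 ] && echo yes        # stderr: "[: : integer expression expected"; exit status 2
    [ "${VAR:-0}" -eq 1 ] && echo yes   # guarded form: empty counts as 0, no warning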
00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:58.165 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.165 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:58.166 Found net devices under 0000:31:00.0: cvl_0_0 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:25:58.166 Found net devices under 0000:31:00.1: cvl_0_1 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.166 15:38:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:25:58.166 00:25:58.166 --- 10.0.0.2 ping statistics --- 00:25:58.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.166 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:25:58.166 00:25:58.166 --- 10.0.0.1 ping statistics --- 00:25:58.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.166 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3906334 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3906334 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3906334 ']' 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:58.166 15:38:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:58.166 15:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.166 [2024-11-06 15:38:15.553112] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:25:58.166 [2024-11-06 15:38:15.553180] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.166 [2024-11-06 15:38:15.654466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:58.166 [2024-11-06 15:38:15.706341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.166 [2024-11-06 15:38:15.706388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.166 [2024-11-06 15:38:15.706397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.166 [2024-11-06 15:38:15.706405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.166 [2024-11-06 15:38:15.706411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.166 [2024-11-06 15:38:15.708147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.166 [2024-11-06 15:38:15.708151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.430 15:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:58.430 15:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:58.430 15:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.430 15:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:58.430 15:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.430 15:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.430 15:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3906334 00:25:58.430 15:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:58.691 [2024-11-06 15:38:16.577331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.691 15:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:58.952 Malloc0 00:25:58.952 15:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:59.213 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:59.473 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.473 [2024-11-06 15:38:17.394310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.473 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:59.734 [2024-11-06 15:38:17.586816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:59.734 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:59.734 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3906701 00:25:59.734 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:59.734 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3906701 /var/tmp/bdevperf.sock 00:25:59.734 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3906701 ']' 00:25:59.734 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:59.734 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:59.734 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:59.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
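[editor's note] Everything up to this point is target bring-up: the two E810 ports found earlier are split between a network namespace (cvl_0_0 at 10.0.0.2, target side) and the host (cvl_0_1 at 10.0.0.1, initiator side), nvmf_tgt starts inside the namespace, and the subsystem is configured over rpc.py. Collected from the traced commands, the configuration sequence is (rpc.py path shortened for readability; every argument appears verbatim above):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The listeners on ports 4420 and 4421 are the two paths the ANA checks below toggle between; bdevperf then attaches one controller per port with -x multipath, so both appear as I/O paths of a single Nvme0n1 bdev.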
00:25:59.734 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:59.734 15:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:00.676 15:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:00.676 15:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:00.677 15:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:00.936 15:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:01.197 Nvme0n1 00:26:01.197 15:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:01.456 Nvme0n1 00:26:01.456 15:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:01.456 15:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:03.367 15:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:03.367 15:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:03.627 15:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:03.888 15:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:04.829 15:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:04.829 15:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:04.829 15:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.829 15:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.090 15:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.350 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.350 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.350 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.350 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.350 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.350 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.350 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.350 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.610 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.610 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.610 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.610 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.870 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.870 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:05.870 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.870 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.870 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.870 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:05.870 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
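[editor's note] Each port_status call in these checks pairs a bdev_nvme_get_io_paths RPC against bdevperf's socket with the jq filter visible in the trace. Reconstructed from the trace alone (the real body in host/multipath_status.sh may differ in detail):

    # port_status <trsvcid> <attribute> <expected>: succeeds iff the I/O path for
    # the given listener port reports the expected current/connected/accessible value.
    port_status() {
        local port=$1 attr=$2 expected=$3
        local got
        got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$got" == "$expected" ]]
    }

A false comparison makes the helper fail, which fails the test, so with xtrace on each invocation shows up as the repeating three-part pattern that fills the rest of this section: one RPC, one jq filter, one [[ ... == ... ]] comparison.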
00:26:06.130 15:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:06.390 15:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:07.332 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:07.332 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:07.332 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.332 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.593 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.593 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:07.593 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.593 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.593 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.593 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.593 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.593 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.854 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.854 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.854 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.854 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.114 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.114 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.114 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
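[editor's note] check_status itself is six port_status calls in a fixed order, 4420/4421 current, then connected, then accessible, so its six boolean arguments read as a status vector for the two paths. A sketch matching the traced order (a reconstruction; the in-tree helper may differ):

    check_status() {
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

For example, the check_status false true true true true true above asserts that once 4420 is demoted to non_optimized, the optimized 4421 path takes over as the current one while both paths stay connected and accessible.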
00:26:08.114 15:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.374 15:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.374 15:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:08.374 15:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.374 15:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.374 15:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.374 15:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:08.374 15:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:08.635 15:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:08.896 15:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:09.838 15:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:09.838 15:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:09.838 15:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.838 15:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.098 15:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.098 15:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:10.098 15:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.098 15:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.098 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.098 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.098 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.098 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.359 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.359 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.359 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.359 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.619 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.619 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:10.619 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.619 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.880 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.880 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:10.880 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.880 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.880 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.880 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:10.880 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:11.141 15:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:11.141 15:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:12.523 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:12.523 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:12.523 15:38:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.523 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.523 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.523 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:12.523 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.523 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:12.523 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.523 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:12.523 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.524 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:12.784 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.784 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:12.784 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.784 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.045 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.045 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.045 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.045 15:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.306 15:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.306 15:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:13.306 15:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.306 15:38:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:13.306 15:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.306 15:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:13.306 15:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:13.566 15:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:13.826 15:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:14.770 15:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:14.770 15:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:14.770 15:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.770 15:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.030 15:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.030 15:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:15.030 15:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.030 15:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.030 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.030 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.030 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.030 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.290 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.290 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.291 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.291 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.571 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.571 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:15.571 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.571 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.874 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.875 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:15.875 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.875 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.875 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.875 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:15.875 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:16.175 15:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:16.175 15:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:17.117 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:17.117 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:17.117 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.117 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.379 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.379 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:17.379 15:38:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.379 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.640 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.640 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.640 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.640 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.640 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.640 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:17.901 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.901 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:17.901 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.901 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:17.901 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.901 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.161 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.162 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:18.162 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.162 15:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.422 15:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.422 15:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:18.422 15:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:18.422 15:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:18.682 15:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:18.942 15:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:19.883 15:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:19.883 15:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:19.883 15:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.883 15:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.143 15:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.143 15:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:20.143 15:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.143 15:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.143 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.143 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.143 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.143 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.404 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.404 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.404 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.404 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.665 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.665 15:38:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.665 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.665 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.665 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.665 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.665 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.665 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.925 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.925 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:20.925 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.186 15:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.186 15:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:22.575 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:22.575 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:22.575 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.575 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.575 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.575 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:22.575 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.575 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.575 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.575 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.575 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.575 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.835 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.835 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.835 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.835 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.095 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.095 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.095 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.095 15:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.095 15:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.095 15:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.095 15:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.095 15:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.356 15:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.356 15:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:23.356 15:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:23.617 15:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:23.877 15:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
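(An aside for anyone reconstructing the test flow from this trace: the helpers exercised above can be sketched as follows. The rpc.py path, bdevperf RPC socket, NQN, target address, and jq filters are taken verbatim from the @59-@73 trace lines; the function bodies themselves are inferred from the trace, not copied from multipath_status.sh.)

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    set_ANA_state() {
        # $1/$2: ANA state for the 4420/4421 listeners (trace lines @59 and @60)
        $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n $1
        $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n $2
    }

    port_status() {
        # $1: trsvcid, $2: io_path field (current|connected|accessible), $3: expected value
        local status
        status=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

    check_status() {
        # current/connected/accessible for ports 4420 and 4421, in the order the trace shows
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
            port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
            port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

(Each set_ANA_state in the trace is followed by a sleep 1 before check_status, giving the initiator time to observe the new ANA state.)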
00:26:24.819 15:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:24.819 15:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:24.819 15:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.819 15:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.080 15:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.080 15:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:25.080 15:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.080 15:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.080 15:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.080 15:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.080 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.080 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.340 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.341 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.341 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.341 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.601 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.601 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.601 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.601 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.601 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.601 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.601 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.601 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.861 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.861 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:25.861 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.121 15:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:26.382 15:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:27.324 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:27.324 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.324 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.324 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.585 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.585 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:27.585 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.585 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.585 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.585 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.585 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.585 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.846 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:27.846 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.846 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.846 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.107 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.107 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:28.108 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.108 15:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.108 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.108 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:28.108 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.108 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.369 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.369 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3906701 00:26:28.369 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3906701 ']' 00:26:28.369 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3906701 00:26:28.369 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:28.369 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:28.369 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3906701 00:26:28.369 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:28.369 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:28.369 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3906701' 00:26:28.369 killing process with pid 3906701 00:26:28.369 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3906701 00:26:28.369 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3906701 00:26:28.369 { 00:26:28.369 "results": [ 00:26:28.369 { 00:26:28.369 "job": "Nvme0n1", 
00:26:28.369 "core_mask": "0x4", 00:26:28.369 "workload": "verify", 00:26:28.369 "status": "terminated", 00:26:28.369 "verify_range": { 00:26:28.369 "start": 0, 00:26:28.369 "length": 16384 00:26:28.369 }, 00:26:28.369 "queue_depth": 128, 00:26:28.369 "io_size": 4096, 00:26:28.369 "runtime": 26.797377, 00:26:28.369 "iops": 11794.25135527257, 00:26:28.369 "mibps": 46.07129435653348, 00:26:28.369 "io_failed": 0, 00:26:28.369 "io_timeout": 0, 00:26:28.369 "avg_latency_us": 10833.612124472005, 00:26:28.369 "min_latency_us": 262.82666666666665, 00:26:28.369 "max_latency_us": 3019898.88 00:26:28.369 } 00:26:28.369 ], 00:26:28.369 "core_count": 1 00:26:28.369 } 00:26:28.634 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3906701 00:26:28.634 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:28.634 [2024-11-06 15:38:17.675059] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:26:28.634 [2024-11-06 15:38:17.675152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3906701 ] 00:26:28.634 [2024-11-06 15:38:17.770224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.634 [2024-11-06 15:38:17.821482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.634 Running I/O for 90 seconds... 00:26:28.634 10193.00 IOPS, 39.82 MiB/s [2024-11-06T14:38:46.617Z] 10667.00 IOPS, 41.67 MiB/s [2024-11-06T14:38:46.617Z] 10891.67 IOPS, 42.55 MiB/s [2024-11-06T14:38:46.617Z] 10992.25 IOPS, 42.94 MiB/s [2024-11-06T14:38:46.617Z] 11257.40 IOPS, 43.97 MiB/s [2024-11-06T14:38:46.617Z] 11490.17 IOPS, 44.88 MiB/s [2024-11-06T14:38:46.617Z] 11680.86 IOPS, 45.63 MiB/s [2024-11-06T14:38:46.617Z] 11822.50 IOPS, 46.18 MiB/s [2024-11-06T14:38:46.617Z] 11936.56 IOPS, 46.63 MiB/s [2024-11-06T14:38:46.617Z] 12019.60 IOPS, 46.95 MiB/s [2024-11-06T14:38:46.617Z] 12080.27 IOPS, 47.19 MiB/s [2024-11-06T14:38:46.617Z] [2024-11-06 15:38:31.407951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.407984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:124 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.634 [2024-11-06 15:38:31.408589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.634 [2024-11-06 15:38:31.408604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.634 [2024-11-06 15:38:31.408621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.634 [2024-11-06 15:38:31.408637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.634 [2024-11-06 15:38:31.408653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.634 [2024-11-06 15:38:31.408668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.634 [2024-11-06 15:38:31.408685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408695] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:28.634 [2024-11-06 15:38:31.408749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.634 [2024-11-06 15:38:31.408754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.408765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.408770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.408781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.408786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.408867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.408874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.408888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.408893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.408905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.408911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.408924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.408929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 
sqhd:004c p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.408942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.408947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.408960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.408965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.408977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.408983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.408998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.409003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.409021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.409039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.409056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.409074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.409092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.409110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.409128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.409146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.409164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.635 [2024-11-06 15:38:31.409182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 
15:38:31.409620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115224 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:28.635 [2024-11-06 15:38:31.409850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.635 [2024-11-06 15:38:31.409855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.409869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.409874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.409888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.409893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.409907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.409912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.409926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.409931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.409976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.409982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.409997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
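(The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions filling this stretch of try.txt are printed by SPDK's nvme_qpair helpers and are consistent with the deliberate ANA flips driven above, rather than unexpected I/O failures. When sifting a capture like this, a per-command-id tally can be pulled with a one-liner; the grep pattern is taken from the lines above, and try.txt is the file cat'ed earlier in this log:)

    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:[0-9]*' try.txt | sort | uniq -c | sort -rn | head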
00:26:28.636 [2024-11-06 15:38:31.410237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:115384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:115440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:28.636 [2024-11-06 15:38:31.410417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.636 [2024-11-06 15:38:31.410422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:28.636 [repeated 2024-11-06 15:38:31 NOTICE pairs elided: nvme_qpair.c:243 nvme_io_qpair_print_command reports each WRITE (lba 115464-115480) and READ (lba 114696-114816) on sqid:1, and nvme_qpair.c:474 spdk_nvme_print_completion reports each of them completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1]
00:26:28.637 12115.58 IOPS, 47.33 MiB/s
00:26:28.637 [2024-11-06T14:38:46.620Z] 11183.62 IOPS, 43.69 MiB/s
00:26:28.637 [2024-11-06T14:38:46.620Z] 10384.79 IOPS, 40.57 MiB/s
00:26:28.637 [2024-11-06T14:38:46.620Z] 9727.53 IOPS, 38.00 MiB/s
00:26:28.637 [2024-11-06T14:38:46.620Z] 9921.69 IOPS, 38.76 MiB/s
00:26:28.637 [2024-11-06T14:38:46.620Z] 10084.18 IOPS, 39.39 MiB/s
00:26:28.637 [2024-11-06T14:38:46.620Z] 10434.28 IOPS, 40.76 MiB/s
00:26:28.637 [2024-11-06T14:38:46.620Z] 10772.42 IOPS, 42.08 MiB/s
00:26:28.637 [2024-11-06T14:38:46.620Z] 10985.90 IOPS, 42.91 MiB/s
00:26:28.637 [2024-11-06T14:38:46.620Z] 11067.05 IOPS, 43.23 MiB/s
00:26:28.637 [2024-11-06T14:38:46.620Z] 11147.32 IOPS, 43.54 MiB/s
00:26:28.637 [2024-11-06T14:38:46.620Z] 11358.52 IOPS, 44.37 MiB/s
00:26:28.637 [2024-11-06T14:38:46.620Z] 11584.38 IOPS, 45.25 MiB/s
00:26:28.637 [second burst of the same notice pairs elided, stamped 2024-11-06 15:38:44: READ (lba 77672-77952) and WRITE (lba 77968-78672) commands on sqid:1, all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1]
00:26:28.639 11728.84 IOPS, 45.82 MiB/s
00:26:28.639 [2024-11-06T14:38:46.622Z] 11772.38 IOPS, 45.99 MiB/s
00:26:28.639 [2024-11-06T14:38:46.622Z] Received shutdown signal, test time was about 26.797995 seconds
00:26:28.639
00:26:28.639 Latency(us)
00:26:28.639 [2024-11-06T14:38:46.622Z]
00:26:28.639 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min     max
00:26:28.639 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:28.639 Verification LBA range: start 0x0 length 0x4000
00:26:28.639 Nvme0n1            : 26.80       11794.25  46.07  0.00    0.00  10833.61  262.83  3019898.88
00:26:28.639 [2024-11-06T14:38:46.622Z] ===================================================================================================================
00:26:28.639 [2024-11-06T14:38:46.622Z] Total :            11794.25  46.07  0.00    0.00  10833.61  262.83  3019898.88
00:26:28.639 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:28.639 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
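
The two traced commands above are the whole target-side teardown for this suite: drop the subsystem over JSON-RPC, then disarm the EXIT trap so the cleanup path does not run twice. A minimal standalone sketch, assuming an SPDK checkout and rpc.py talking to the default /var/tmp/spdk.sock:

  # delete the NVMe-oF subsystem created for the test (NQN as used in this run)
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # clear the SIGINT/SIGTERM/EXIT handlers installed at test start
  trap - SIGINT SIGTERM EXIT
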
00:26:28.639 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3906334 ']'
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3906334
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3906334 ']'
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3906334
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:26:28.899 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3906334
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3906334'
killing process with pid 3906334
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3906334
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3906334
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
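
The unwind order in nvmftestfini traced above matters: flush, unload the kernel initiator modules, then kill the target process recorded at startup. A condensed sketch of that sequence (pid and module names taken from this run; wait only succeeds because the target is a child of the harness shell):

  sync
  modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring leaving with it
  modprobe -v -r nvme-fabrics
  kill 3906334                   # the nvmf_tgt reactor from this run
  wait 3906334
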
00:26:28.899 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:26:28.899 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:28.899 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:28.899 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:28.899 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:28.899 15:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:31.444 15:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:31.444
00:26:31.444 real 0m41.272s
00:26:31.444 user 1m46.131s
00:26:31.444 sys 0m11.878s
00:26:31.444 15:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:31.444 15:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:31.444 ************************************
00:26:31.444 END TEST nvmf_host_multipath_status
00:26:31.444 ************************************
00:26:31.444 15:38:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:31.444 15:38:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:26:31.444 15:38:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:26:31.444 15:38:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.444 ************************************
00:26:31.444 START TEST nvmf_discovery_remove_ifc
00:26:31.444 ************************************
00:26:31.444 15:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:31.444 * Looking for test storage...
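
The START/END banners and the real/user/sys block above come from the run_test wrapper, which times each suite script between banner prints. The wrapper's real body is not shown in this log; a rough sketch of its apparent shape (names assumed):

  run_test() {
      local test_name=$1
      shift
      echo "************ START TEST $test_name ************"
      time "$@"                  # runs the suite script with its arguments
      echo "************ END TEST $test_name ************"
  }

  # as invoked above:
  run_test nvmf_discovery_remove_ifc host/discovery_remove_ifc.sh --transport=tcp
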
00:26:31.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:31.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.444 --rc genhtml_branch_coverage=1 00:26:31.444 --rc genhtml_function_coverage=1 00:26:31.444 --rc genhtml_legend=1 00:26:31.444 --rc geninfo_all_blocks=1 00:26:31.444 --rc geninfo_unexecuted_blocks=1 00:26:31.444 00:26:31.444 ' 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:31.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.444 --rc genhtml_branch_coverage=1 00:26:31.444 --rc genhtml_function_coverage=1 00:26:31.444 --rc genhtml_legend=1 00:26:31.444 --rc geninfo_all_blocks=1 00:26:31.444 --rc geninfo_unexecuted_blocks=1 00:26:31.444 00:26:31.444 ' 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:31.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.444 --rc genhtml_branch_coverage=1 00:26:31.444 --rc genhtml_function_coverage=1 00:26:31.444 --rc genhtml_legend=1 00:26:31.444 --rc geninfo_all_blocks=1 00:26:31.444 --rc geninfo_unexecuted_blocks=1 00:26:31.444 00:26:31.444 ' 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:31.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.444 --rc genhtml_branch_coverage=1 00:26:31.444 --rc genhtml_function_coverage=1 00:26:31.444 --rc genhtml_legend=1 00:26:31.444 --rc geninfo_all_blocks=1 00:26:31.444 --rc geninfo_unexecuted_blocks=1 00:26:31.444 00:26:31.444 ' 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.444 
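
The trace above is lcov gating: the harness takes the last field of `lcov --version` (1.15 here) and compares it field by field against 2 before exporting the coverage option strings shown above. A simplified sketch of that dotted-version compare, reduced to the '<' case (the real helper, cmp_versions, takes an operator argument):

  cmp_lt() {                       # returns 0 (true) when version $1 sorts before version $2
      local IFS=.-
      local -a v1 v2
      local i
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                     # equal versions are not "less than"
  }

  # matches the gate traced above: 1.15 < 2, so the lcov 1.x options are used
  cmp_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov"
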
15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:31.444 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:31.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:31.445 15:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:39.584 15:38:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:39.584 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.584 15:38:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:39.584 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:39.584 Found net devices under 0000:31:00.0: cvl_0_0 00:26:39.584 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:39.585 Found net devices under 0000:31:00.1: cvl_0_1 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:39.585 
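
At this point nvmf_tcp_init has built the split topology this suite runs on, and the ping checks just below verify it: the target-side port (cvl_0_0) moves into a private namespace while the initiator-side port (cvl_0_1) stays in the root namespace, giving a loop-through 10.0.0.0/24 link between the two E810 ports. Condensed from the commands traced above (device and namespace names from this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator side, tagged so teardown can strip it with grep -v SPDK_NVMF
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
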
15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:39.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:26:39.585 00:26:39.585 --- 10.0.0.2 ping statistics --- 00:26:39.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.585 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:26:39.585 00:26:39.585 --- 10.0.0.1 ping statistics --- 00:26:39.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.585 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3916663 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3916663 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3916663 ']' 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:39.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:39.585 15:38:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.585 [2024-11-06 15:38:56.797705] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:26:39.585 [2024-11-06 15:38:56.797788] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.585 [2024-11-06 15:38:56.881349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.585 [2024-11-06 15:38:56.932602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.585 [2024-11-06 15:38:56.932653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.585 [2024-11-06 15:38:56.932662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.585 [2024-11-06 15:38:56.932669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.585 [2024-11-06 15:38:56.932675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.585 [2024-11-06 15:38:56.933452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.846 [2024-11-06 15:38:57.673703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.846 [2024-11-06 15:38:57.681932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:39.846 null0 00:26:39.846 [2024-11-06 15:38:57.713907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3916970 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3916970 /tmp/host.sock 00:26:39.846 15:38:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3916970 ']' 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:39.846 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:39.846 15:38:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.846 [2024-11-06 15:38:57.789216] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:26:39.846 [2024-11-06 15:38:57.789277] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916970 ] 00:26:40.107 [2024-11-06 15:38:57.883530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.107 [2024-11-06 15:38:57.936392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.677 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:40.677 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:40.677 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:40.677 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:40.677 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.677 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.677 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.677 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:40.677 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.678 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.938 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.938 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:40.938 15:38:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.938 15:38:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.886 [2024-11-06 15:38:59.733947] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:41.886 [2024-11-06 15:38:59.733978] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:41.886 [2024-11-06 15:38:59.733993] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:41.886 [2024-11-06 15:38:59.821262] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:42.148 [2024-11-06 15:38:59.922315] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:42.148 [2024-11-06 15:38:59.923505] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x5a2550:1 started. 00:26:42.148 [2024-11-06 15:38:59.925324] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:42.148 [2024-11-06 15:38:59.925404] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:42.148 [2024-11-06 15:38:59.925429] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:42.148 [2024-11-06 15:38:59.925447] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:42.148 [2024-11-06 15:38:59.925472] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.148 [2024-11-06 15:38:59.931686] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5a2550 was disconnected and freed. delete nvme_qpair. 
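The get_bdev_list/wait_for_bdev rounds that follow poll the host app's RPC socket once per second until the bdev list matches an expected string. A minimal sketch of that idiom, condensed from the trace lines (not verbatim from discovery_remove_ifc.sh; rpc_cmd is assumed to wrap SPDK's scripts/rpc.py, as in SPDK's autotest_common.sh):

    # Sketch of the polling idiom exercised in the trace below.
    get_bdev_list() {
        # Emit all bdev names as one sorted, space-separated string.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # Re-check once per second until the list equals the expectation
        # (e.g. "nvme0n1", or "" once the controller has been torn down).
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }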
00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:26:42.148 15:38:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:26:42.148 15:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:26:42.148 15:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:42.148 15:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:42.148 15:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:42.148 15:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:42.148 15:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:42.148 15:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:42.148 15:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:42.408 15:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:42.408 15:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:42.408 15:39:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:43.350 15:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:43.350 15:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:43.350 15:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:43.350 15:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:43.350 15:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:43.350 15:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:43.350 15:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:43.350 15:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:43.350 15:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:43.350 15:39:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:44.291 15:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:44.291 15:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:44.291 15:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:44.291 15:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:44.291 15:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:44.291 15:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:44.291 15:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:44.291 15:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:44.552 15:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:44.552 15:39:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:45.495 15:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:45.495 15:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:45.495 15:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:45.495 15:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:45.495 15:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:45.495 15:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:45.495 15:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:45.495 15:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:45.495 15:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:45.495 15:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:46.437 15:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:46.437 15:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:46.437 15:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:46.437 15:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.437 15:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:46.437 15:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:46.437 15:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:46.437 15:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.437 15:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:46.437 15:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:47.819 [2024-11-06 15:39:05.365528] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:26:47.819 [2024-11-06 15:39:05.365570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.819 [2024-11-06 15:39:05.365580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.819 [2024-11-06 15:39:05.365587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.819 [2024-11-06 15:39:05.365593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.819 [2024-11-06 15:39:05.365599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.819 [2024-11-06 15:39:05.365604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.819 [2024-11-06 15:39:05.365610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.819 [2024-11-06 15:39:05.365615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.819 [2024-11-06 15:39:05.365620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.819 [2024-11-06 15:39:05.365625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.819 [2024-11-06 15:39:05.365631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57eec0 is same with the state(6) to be set
00:26:47.819 [2024-11-06 15:39:05.375549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57eec0 (9): Bad file descriptor
00:26:47.819 [2024-11-06 15:39:05.385586] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:47.819 [2024-11-06 15:39:05.385595] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:47.819 [2024-11-06 15:39:05.385598] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:47.819 [2024-11-06 15:39:05.385602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:47.819 [2024-11-06 15:39:05.385620] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
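The errno-110 burst above follows directly from the fault injected earlier in the trace (discovery_remove_ifc.sh@75-76): the target's address was deleted and its interface downed inside the namespace, so the host's reconnect attempts now time out. The injected step, restated as a sketch using the same commands the trace shows (wait_for_bdev as sketched earlier):

    # Fault injection, as traced at discovery_remove_ifc.sh@75-76:
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''   # nvme0n1 should vanish once the ctrlr is declared lost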
00:26:47.819 15:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:47.819 15:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:47.819 15:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:47.819 15:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:47.819 15:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:47.819 15:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:47.819 15:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:48.760 [2024-11-06 15:39:06.397852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:26:48.760 [2024-11-06 15:39:06.397954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57eec0 with addr=10.0.0.2, port=4420
00:26:48.760 [2024-11-06 15:39:06.397988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57eec0 is same with the state(6) to be set
00:26:48.760 [2024-11-06 15:39:06.398053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57eec0 (9): Bad file descriptor
00:26:48.760 [2024-11-06 15:39:06.399186] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress.
00:26:48.760 [2024-11-06 15:39:06.399260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:48.760 [2024-11-06 15:39:06.399282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:48.760 [2024-11-06 15:39:06.399306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:48.760 [2024-11-06 15:39:06.399327] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:48.760 [2024-11-06 15:39:06.399343] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:48.760 [2024-11-06 15:39:06.399357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:48.760 [2024-11-06 15:39:06.399380] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:48.760 [2024-11-06 15:39:06.399394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:48.760 15:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:48.760 15:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:48.760 15:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:49.704 [2024-11-06 15:39:07.401817] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:49.704 [2024-11-06 15:39:07.401834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:49.704 [2024-11-06 15:39:07.401845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:49.704 [2024-11-06 15:39:07.401850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:49.704 [2024-11-06 15:39:07.401856] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state
00:26:49.704 [2024-11-06 15:39:07.401862] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:49.704 [2024-11-06 15:39:07.401865] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:49.704 [2024-11-06 15:39:07.401869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:49.704 [2024-11-06 15:39:07.401890] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:26:49.704 [2024-11-06 15:39:07.401911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:49.704 [2024-11-06 15:39:07.401919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:49.704 [2024-11-06 15:39:07.401928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:49.704 [2024-11-06 15:39:07.401934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:49.704 [2024-11-06 15:39:07.401943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:49.704 [2024-11-06 15:39:07.401948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:49.704 [2024-11-06 15:39:07.401954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:49.704 [2024-11-06 15:39:07.401959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:49.704 [2024-11-06 15:39:07.401965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:26:49.704 [2024-11-06 15:39:07.401970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:49.704 [2024-11-06 15:39:07.401975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state.
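How quickly the host gives up, fails the reset, and removes the discovery entry (seen above) is governed by the flags passed when discovery was started at trace line discovery_remove_ifc.sh@69. The same RPC reflowed for readability; the flag semantics in the comments are paraphrased from SPDK's bdev_nvme documentation, not from this log:

    # --ctrlr-loss-timeout-sec 2: give up and delete the ctrlr after ~2s offline
    # --reconnect-delay-sec 1:    retry the connection once per second
    # --fast-io-fail-timeout-sec 1: fail I/O quickly while reconnecting
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach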
00:26:49.704 [2024-11-06 15:39:07.402387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56e600 (9): Bad file descriptor
00:26:49.704 [2024-11-06 15:39:07.403397] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:26:49.704 [2024-11-06 15:39:07.403406] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]]
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:26:49.704 15:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:51.090 15:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:51.090 15:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:51.090 15:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:51.090 15:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:51.090 15:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:51.090 15:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:51.090 15:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:51.090 15:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:51.090 15:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:26:51.090 15:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:51.661 [2024-11-06 15:39:09.458869] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:26:51.661 [2024-11-06 15:39:09.458882] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:26:51.661 [2024-11-06 15:39:09.458892] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:51.661 [2024-11-06 15:39:09.548152] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1
00:26:51.661 [2024-11-06 15:39:09.604853] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420
00:26:51.661 [2024-11-06 15:39:09.605440] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x589510:1 started.
00:26:51.661 [2024-11-06 15:39:09.606350] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:26:51.661 [2024-11-06 15:39:09.606378] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:26:51.661 [2024-11-06 15:39:09.606394] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:26:51.661 [2024-11-06 15:39:09.606406] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done
00:26:51.661 [2024-11-06 15:39:09.606412] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:51.661 [2024-11-06 15:39:09.614756] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x589510 was disconnected and freed. delete nvme_qpair.
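The reattach above (subsystem rediscovered as nvme1, second ctrlr index, new qpair) mirrors the earlier removal: once the address and link are restored inside the namespace at trace lines discovery_remove_ifc.sh@82-83, the discovery poller finds nqn.2016-06.io.spdk:cnode0 again and the test waits for the new bdev. The recovery step as a sketch, using the commands the trace shows:

    # Recovery, as traced at discovery_remove_ifc.sh@82-83:
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1   # the re-attached controller surfaces as a new bdev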
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3916970
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3916970 ']'
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3916970
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3916970
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3916970'
00:26:51.922 killing process with pid 3916970
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3916970
00:26:51.922 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3916970
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:52.182 rmmod nvme_tcp
00:26:52.182 rmmod nvme_fabrics
00:26:52.182 rmmod nvme_keyring
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3916663 ']'
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3916663
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3916663 ']'
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3916663
00:26:52.182 15:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname
00:26:52.182 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:52.183 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3916663
00:26:52.183 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:26:52.183 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:26:52.183 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3916663'
00:26:52.183 killing process with pid 3916663
00:26:52.183 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3916663
00:26:52.183 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3916663
00:26:52.444 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:52.444 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:52.444 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:52.444 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr
00:26:52.444 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save
00:26:52.444 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:52.444 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore
00:26:52.444 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:52.444 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:52.444 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:52.444 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:52.444 15:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:54.356 15:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:54.356
00:26:54.356 real 0m23.301s
00:26:54.356 user 0m27.169s
00:26:54.356 sys 0m7.093s
00:26:54.356 15:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:54.356 15:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:54.356 ************************************
00:26:54.356 END TEST nvmf_discovery_remove_ifc
00:26:54.356 ************************************
00:26:54.356 15:39:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:26:54.356 15:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:26:54.356 15:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:26:54.356 15:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:54.356 ************************************
00:26:54.356 START TEST nvmf_identify_kernel_target
00:26:54.356 ************************************
00:26:54.356 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:26:54.618 * Looking for test storage...
00:26:54.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:26:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:54.618 --rc genhtml_branch_coverage=1
00:26:54.618 --rc genhtml_function_coverage=1
00:26:54.618 --rc genhtml_legend=1
00:26:54.618 --rc geninfo_all_blocks=1
00:26:54.618 --rc geninfo_unexecuted_blocks=1
00:26:54.618
00:26:54.618 '
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:26:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:54.618 --rc genhtml_branch_coverage=1
00:26:54.618 --rc genhtml_function_coverage=1
00:26:54.618 --rc genhtml_legend=1
00:26:54.618 --rc geninfo_all_blocks=1
00:26:54.618 --rc geninfo_unexecuted_blocks=1
00:26:54.618
00:26:54.618 '
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:26:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:54.618 --rc genhtml_branch_coverage=1
00:26:54.618 --rc genhtml_function_coverage=1
00:26:54.618 --rc genhtml_legend=1
00:26:54.618 --rc geninfo_all_blocks=1
00:26:54.618 --rc geninfo_unexecuted_blocks=1
00:26:54.618
00:26:54.618 '
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:26:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:54.618 --rc genhtml_branch_coverage=1
00:26:54.618 --rc genhtml_function_coverage=1
00:26:54.618 --rc genhtml_legend=1
00:26:54.618 --rc geninfo_all_blocks=1
00:26:54.618 --rc geninfo_unexecuted_blocks=1
00:26:54.618
00:26:54.618 '
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:54.618 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:54.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:26:54.619 15:39:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=()
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=()
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=()
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=()
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=()
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:27:02.764 Found 0000:31:00.0 (0x8086 - 0x159b)
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:27:02.764 Found 0000:31:00.1 (0x8086 - 0x159b)
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:02.764 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:27:02.764 Found net devices under 0000:31:00.0: cvl_0_0
00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:27:02.765 Found net devices under 0000:31:00.1: cvl_0_1
00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429
-- # net_devs+=("${pci_net_devs[@]}") 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.765 15:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:02.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:27:02.765 00:27:02.765 --- 10.0.0.2 ping statistics --- 00:27:02.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.765 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:02.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:27:02.765 00:27:02.765 --- 10.0.0.1 ping statistics --- 00:27:02.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.765 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.765 15:39:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:02.765 15:39:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:06.070 Waiting for block devices as requested 00:27:06.070 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:06.070 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:06.070 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:06.070 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:06.331 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:06.331 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:06.331 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:06.592 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:06.592 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:06.853 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:06.853 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:06.853 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:07.114 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:07.114 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:07.114 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:07.375 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:07.375 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
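
The configfs commands traced in the next stretch (nvmf/common.sh@686 through @705) are what configure_kernel_target boils down to: create a subsystem, attach the probed /dev/nvme0n1 as namespace 1, open a TCP listener on 10.0.0.1:4420, and link the two. Because xtrace does not record redirection targets, the attribute file names below are filled in from the kernel nvmet configfs layout rather than from the log itself; treat this as a minimal standalone sketch under those assumptions, not the exact SPDK helper. (The trace also writes an identification string, echo SPDK-nqn.2016-06.io.spdk:testnqn; its target attribute is likewise not visible in the xtrace, so it is omitted here.)

#!/usr/bin/env bash
# Sketch: export a local NVMe namespace over NVMe/TCP via the kernel nvmet configfs tree.
set -e
NQN=nqn.2016-06.io.spdk:testnqn   # subsystem NQN used by the test
NVMET=/sys/kernel/config/nvmet

modprobe nvmet                                            # target core; makes $NVMET appear
modprobe nvmet-tcp                                        # TCP transport for the port below
mkdir "$NVMET/subsystems/$NQN"
mkdir "$NVMET/subsystems/$NQN/namespaces/1"
mkdir "$NVMET/ports/1"
echo 1            > "$NVMET/subsystems/$NQN/attr_allow_any_host"      # assumption: test host is not restricted
echo /dev/nvme0n1 > "$NVMET/subsystems/$NQN/namespaces/1/device_path" # back the namespace with the idle disk
echo 1            > "$NVMET/subsystems/$NQN/namespaces/1/enable"
echo 10.0.0.1     > "$NVMET/ports/1/addr_traddr"
echo tcp          > "$NVMET/ports/1/addr_trtype"
echo 4420         > "$NVMET/ports/1/addr_trsvcid"
echo ipv4         > "$NVMET/ports/1/addr_adrfam"
ln -s "$NVMET/subsystems/$NQN" "$NVMET/ports/1/subsystems/"           # expose the subsystem on the port

Once the symlink lands, the discovery log shown further down ("Discovery Log Number of Records 2") is what nvme discover against 10.0.0.1:4420 should return: one entry for the discovery subsystem itself plus one for nqn.2016-06.io.spdk:testnqn.
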
00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:07.637 No valid GPT data, bailing 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:07.637 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:07.898 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:27:07.898 00:27:07.898 Discovery Log Number of Records 2, Generation counter 2 00:27:07.898 =====Discovery Log Entry 0====== 00:27:07.898 trtype: tcp 00:27:07.898 adrfam: ipv4 00:27:07.898 subtype: current discovery subsystem 00:27:07.898 treq: not specified, sq flow control disable supported 00:27:07.898 portid: 1 00:27:07.898 trsvcid: 4420 00:27:07.898 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:07.898 traddr: 10.0.0.1 00:27:07.898 eflags: none 00:27:07.898 sectype: none 00:27:07.898 =====Discovery Log Entry 1====== 00:27:07.898 trtype: tcp 00:27:07.898 adrfam: ipv4 00:27:07.898 subtype: nvme subsystem 00:27:07.898 treq: not specified, sq flow control disable 
supported 00:27:07.898 portid: 1 00:27:07.898 trsvcid: 4420 00:27:07.898 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:07.898 traddr: 10.0.0.1 00:27:07.898 eflags: none 00:27:07.898 sectype: none 00:27:07.898 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:07.898 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:07.898 ===================================================== 00:27:07.898 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:07.898 ===================================================== 00:27:07.898 Controller Capabilities/Features 00:27:07.898 ================================ 00:27:07.898 Vendor ID: 0000 00:27:07.898 Subsystem Vendor ID: 0000 00:27:07.898 Serial Number: 013fc3be3b42bdb2b502 00:27:07.898 Model Number: Linux 00:27:07.898 Firmware Version: 6.8.9-20 00:27:07.898 Recommended Arb Burst: 0 00:27:07.898 IEEE OUI Identifier: 00 00 00 00:27:07.898 Multi-path I/O 00:27:07.898 May have multiple subsystem ports: No 00:27:07.898 May have multiple controllers: No 00:27:07.898 Associated with SR-IOV VF: No 00:27:07.898 Max Data Transfer Size: Unlimited 00:27:07.898 Max Number of Namespaces: 0 00:27:07.898 Max Number of I/O Queues: 1024 00:27:07.898 NVMe Specification Version (VS): 1.3 00:27:07.898 NVMe Specification Version (Identify): 1.3 00:27:07.898 Maximum Queue Entries: 1024 00:27:07.898 Contiguous Queues Required: No 00:27:07.898 Arbitration Mechanisms Supported 00:27:07.898 Weighted Round Robin: Not Supported 00:27:07.898 Vendor Specific: Not Supported 00:27:07.898 Reset Timeout: 7500 ms 00:27:07.898 Doorbell Stride: 4 bytes 00:27:07.898 NVM Subsystem Reset: Not Supported 00:27:07.898 Command Sets Supported 00:27:07.898 NVM Command Set: Supported 00:27:07.898 Boot Partition: Not Supported 00:27:07.898 Memory Page Size Minimum: 4096 bytes 00:27:07.898 Memory Page Size Maximum: 4096 bytes 00:27:07.898 Persistent Memory Region: Not Supported 00:27:07.898 Optional Asynchronous Events Supported 00:27:07.898 Namespace Attribute Notices: Not Supported 00:27:07.898 Firmware Activation Notices: Not Supported 00:27:07.899 ANA Change Notices: Not Supported 00:27:07.899 PLE Aggregate Log Change Notices: Not Supported 00:27:07.899 LBA Status Info Alert Notices: Not Supported 00:27:07.899 EGE Aggregate Log Change Notices: Not Supported 00:27:07.899 Normal NVM Subsystem Shutdown event: Not Supported 00:27:07.899 Zone Descriptor Change Notices: Not Supported 00:27:07.899 Discovery Log Change Notices: Supported 00:27:07.899 Controller Attributes 00:27:07.899 128-bit Host Identifier: Not Supported 00:27:07.899 Non-Operational Permissive Mode: Not Supported 00:27:07.899 NVM Sets: Not Supported 00:27:07.899 Read Recovery Levels: Not Supported 00:27:07.899 Endurance Groups: Not Supported 00:27:07.899 Predictable Latency Mode: Not Supported 00:27:07.899 Traffic Based Keep ALive: Not Supported 00:27:07.899 Namespace Granularity: Not Supported 00:27:07.899 SQ Associations: Not Supported 00:27:07.899 UUID List: Not Supported 00:27:07.899 Multi-Domain Subsystem: Not Supported 00:27:07.899 Fixed Capacity Management: Not Supported 00:27:07.899 Variable Capacity Management: Not Supported 00:27:07.899 Delete Endurance Group: Not Supported 00:27:07.899 Delete NVM Set: Not Supported 00:27:07.899 Extended LBA Formats Supported: Not Supported 00:27:07.899 Flexible Data Placement 
Supported: Not Supported 00:27:07.899 00:27:07.899 Controller Memory Buffer Support 00:27:07.899 ================================ 00:27:07.899 Supported: No 00:27:07.899 00:27:07.899 Persistent Memory Region Support 00:27:07.899 ================================ 00:27:07.899 Supported: No 00:27:07.899 00:27:07.899 Admin Command Set Attributes 00:27:07.899 ============================ 00:27:07.899 Security Send/Receive: Not Supported 00:27:07.899 Format NVM: Not Supported 00:27:07.899 Firmware Activate/Download: Not Supported 00:27:07.899 Namespace Management: Not Supported 00:27:07.899 Device Self-Test: Not Supported 00:27:07.899 Directives: Not Supported 00:27:07.899 NVMe-MI: Not Supported 00:27:07.899 Virtualization Management: Not Supported 00:27:07.899 Doorbell Buffer Config: Not Supported 00:27:07.899 Get LBA Status Capability: Not Supported 00:27:07.899 Command & Feature Lockdown Capability: Not Supported 00:27:07.899 Abort Command Limit: 1 00:27:07.899 Async Event Request Limit: 1 00:27:07.899 Number of Firmware Slots: N/A 00:27:07.899 Firmware Slot 1 Read-Only: N/A 00:27:07.899 Firmware Activation Without Reset: N/A 00:27:07.899 Multiple Update Detection Support: N/A 00:27:07.899 Firmware Update Granularity: No Information Provided 00:27:07.899 Per-Namespace SMART Log: No 00:27:07.899 Asymmetric Namespace Access Log Page: Not Supported 00:27:07.899 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:07.899 Command Effects Log Page: Not Supported 00:27:07.899 Get Log Page Extended Data: Supported 00:27:07.899 Telemetry Log Pages: Not Supported 00:27:07.899 Persistent Event Log Pages: Not Supported 00:27:07.899 Supported Log Pages Log Page: May Support 00:27:07.899 Commands Supported & Effects Log Page: Not Supported 00:27:07.899 Feature Identifiers & Effects Log Page:May Support 00:27:07.899 NVMe-MI Commands & Effects Log Page: May Support 00:27:07.899 Data Area 4 for Telemetry Log: Not Supported 00:27:07.899 Error Log Page Entries Supported: 1 00:27:07.899 Keep Alive: Not Supported 00:27:07.899 00:27:07.899 NVM Command Set Attributes 00:27:07.899 ========================== 00:27:07.899 Submission Queue Entry Size 00:27:07.899 Max: 1 00:27:07.899 Min: 1 00:27:07.899 Completion Queue Entry Size 00:27:07.899 Max: 1 00:27:07.899 Min: 1 00:27:07.899 Number of Namespaces: 0 00:27:07.899 Compare Command: Not Supported 00:27:07.899 Write Uncorrectable Command: Not Supported 00:27:07.899 Dataset Management Command: Not Supported 00:27:07.899 Write Zeroes Command: Not Supported 00:27:07.899 Set Features Save Field: Not Supported 00:27:07.899 Reservations: Not Supported 00:27:07.899 Timestamp: Not Supported 00:27:07.899 Copy: Not Supported 00:27:07.899 Volatile Write Cache: Not Present 00:27:07.899 Atomic Write Unit (Normal): 1 00:27:07.899 Atomic Write Unit (PFail): 1 00:27:07.899 Atomic Compare & Write Unit: 1 00:27:07.899 Fused Compare & Write: Not Supported 00:27:07.899 Scatter-Gather List 00:27:07.899 SGL Command Set: Supported 00:27:07.899 SGL Keyed: Not Supported 00:27:07.899 SGL Bit Bucket Descriptor: Not Supported 00:27:07.899 SGL Metadata Pointer: Not Supported 00:27:07.899 Oversized SGL: Not Supported 00:27:07.899 SGL Metadata Address: Not Supported 00:27:07.899 SGL Offset: Supported 00:27:07.899 Transport SGL Data Block: Not Supported 00:27:07.899 Replay Protected Memory Block: Not Supported 00:27:07.899 00:27:07.899 Firmware Slot Information 00:27:07.899 ========================= 00:27:07.899 Active slot: 0 00:27:07.899 00:27:07.899 00:27:07.899 Error Log 00:27:07.899 
========= 00:27:07.899 00:27:07.899 Active Namespaces 00:27:07.899 ================= 00:27:07.899 Discovery Log Page 00:27:07.899 ================== 00:27:07.899 Generation Counter: 2 00:27:07.899 Number of Records: 2 00:27:07.899 Record Format: 0 00:27:07.899 00:27:07.899 Discovery Log Entry 0 00:27:07.899 ---------------------- 00:27:07.899 Transport Type: 3 (TCP) 00:27:07.899 Address Family: 1 (IPv4) 00:27:07.899 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:07.899 Entry Flags: 00:27:07.899 Duplicate Returned Information: 0 00:27:07.899 Explicit Persistent Connection Support for Discovery: 0 00:27:07.899 Transport Requirements: 00:27:07.899 Secure Channel: Not Specified 00:27:07.899 Port ID: 1 (0x0001) 00:27:07.899 Controller ID: 65535 (0xffff) 00:27:07.899 Admin Max SQ Size: 32 00:27:07.899 Transport Service Identifier: 4420 00:27:07.899 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:07.899 Transport Address: 10.0.0.1 00:27:07.899 Discovery Log Entry 1 00:27:07.899 ---------------------- 00:27:07.899 Transport Type: 3 (TCP) 00:27:07.899 Address Family: 1 (IPv4) 00:27:07.899 Subsystem Type: 2 (NVM Subsystem) 00:27:07.899 Entry Flags: 00:27:07.899 Duplicate Returned Information: 0 00:27:07.899 Explicit Persistent Connection Support for Discovery: 0 00:27:07.899 Transport Requirements: 00:27:07.899 Secure Channel: Not Specified 00:27:07.899 Port ID: 1 (0x0001) 00:27:07.899 Controller ID: 65535 (0xffff) 00:27:07.899 Admin Max SQ Size: 32 00:27:07.899 Transport Service Identifier: 4420 00:27:07.899 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:07.899 Transport Address: 10.0.0.1 00:27:07.899 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:08.161 get_feature(0x01) failed 00:27:08.161 get_feature(0x02) failed 00:27:08.161 get_feature(0x04) failed 00:27:08.161 ===================================================== 00:27:08.161 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:08.162 ===================================================== 00:27:08.162 Controller Capabilities/Features 00:27:08.162 ================================ 00:27:08.162 Vendor ID: 0000 00:27:08.162 Subsystem Vendor ID: 0000 00:27:08.162 Serial Number: 199bdae013174d4737c7 00:27:08.162 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:08.162 Firmware Version: 6.8.9-20 00:27:08.162 Recommended Arb Burst: 6 00:27:08.162 IEEE OUI Identifier: 00 00 00 00:27:08.162 Multi-path I/O 00:27:08.162 May have multiple subsystem ports: Yes 00:27:08.162 May have multiple controllers: Yes 00:27:08.162 Associated with SR-IOV VF: No 00:27:08.162 Max Data Transfer Size: Unlimited 00:27:08.162 Max Number of Namespaces: 1024 00:27:08.162 Max Number of I/O Queues: 128 00:27:08.162 NVMe Specification Version (VS): 1.3 00:27:08.162 NVMe Specification Version (Identify): 1.3 00:27:08.162 Maximum Queue Entries: 1024 00:27:08.162 Contiguous Queues Required: No 00:27:08.162 Arbitration Mechanisms Supported 00:27:08.162 Weighted Round Robin: Not Supported 00:27:08.162 Vendor Specific: Not Supported 00:27:08.162 Reset Timeout: 7500 ms 00:27:08.162 Doorbell Stride: 4 bytes 00:27:08.162 NVM Subsystem Reset: Not Supported 00:27:08.162 Command Sets Supported 00:27:08.162 NVM Command Set: Supported 00:27:08.162 Boot Partition: Not Supported 00:27:08.162 
Memory Page Size Minimum: 4096 bytes 00:27:08.162 Memory Page Size Maximum: 4096 bytes 00:27:08.162 Persistent Memory Region: Not Supported 00:27:08.162 Optional Asynchronous Events Supported 00:27:08.162 Namespace Attribute Notices: Supported 00:27:08.162 Firmware Activation Notices: Not Supported 00:27:08.162 ANA Change Notices: Supported 00:27:08.162 PLE Aggregate Log Change Notices: Not Supported 00:27:08.162 LBA Status Info Alert Notices: Not Supported 00:27:08.162 EGE Aggregate Log Change Notices: Not Supported 00:27:08.162 Normal NVM Subsystem Shutdown event: Not Supported 00:27:08.162 Zone Descriptor Change Notices: Not Supported 00:27:08.162 Discovery Log Change Notices: Not Supported 00:27:08.162 Controller Attributes 00:27:08.162 128-bit Host Identifier: Supported 00:27:08.162 Non-Operational Permissive Mode: Not Supported 00:27:08.162 NVM Sets: Not Supported 00:27:08.162 Read Recovery Levels: Not Supported 00:27:08.162 Endurance Groups: Not Supported 00:27:08.162 Predictable Latency Mode: Not Supported 00:27:08.162 Traffic Based Keep ALive: Supported 00:27:08.162 Namespace Granularity: Not Supported 00:27:08.162 SQ Associations: Not Supported 00:27:08.162 UUID List: Not Supported 00:27:08.162 Multi-Domain Subsystem: Not Supported 00:27:08.162 Fixed Capacity Management: Not Supported 00:27:08.162 Variable Capacity Management: Not Supported 00:27:08.162 Delete Endurance Group: Not Supported 00:27:08.162 Delete NVM Set: Not Supported 00:27:08.162 Extended LBA Formats Supported: Not Supported 00:27:08.162 Flexible Data Placement Supported: Not Supported 00:27:08.162 00:27:08.162 Controller Memory Buffer Support 00:27:08.162 ================================ 00:27:08.162 Supported: No 00:27:08.162 00:27:08.162 Persistent Memory Region Support 00:27:08.162 ================================ 00:27:08.162 Supported: No 00:27:08.162 00:27:08.162 Admin Command Set Attributes 00:27:08.162 ============================ 00:27:08.162 Security Send/Receive: Not Supported 00:27:08.162 Format NVM: Not Supported 00:27:08.162 Firmware Activate/Download: Not Supported 00:27:08.162 Namespace Management: Not Supported 00:27:08.162 Device Self-Test: Not Supported 00:27:08.162 Directives: Not Supported 00:27:08.162 NVMe-MI: Not Supported 00:27:08.162 Virtualization Management: Not Supported 00:27:08.162 Doorbell Buffer Config: Not Supported 00:27:08.162 Get LBA Status Capability: Not Supported 00:27:08.162 Command & Feature Lockdown Capability: Not Supported 00:27:08.162 Abort Command Limit: 4 00:27:08.162 Async Event Request Limit: 4 00:27:08.162 Number of Firmware Slots: N/A 00:27:08.162 Firmware Slot 1 Read-Only: N/A 00:27:08.162 Firmware Activation Without Reset: N/A 00:27:08.162 Multiple Update Detection Support: N/A 00:27:08.162 Firmware Update Granularity: No Information Provided 00:27:08.162 Per-Namespace SMART Log: Yes 00:27:08.162 Asymmetric Namespace Access Log Page: Supported 00:27:08.162 ANA Transition Time : 10 sec 00:27:08.162 00:27:08.162 Asymmetric Namespace Access Capabilities 00:27:08.162 ANA Optimized State : Supported 00:27:08.162 ANA Non-Optimized State : Supported 00:27:08.162 ANA Inaccessible State : Supported 00:27:08.162 ANA Persistent Loss State : Supported 00:27:08.162 ANA Change State : Supported 00:27:08.162 ANAGRPID is not changed : No 00:27:08.162 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:08.162 00:27:08.162 ANA Group Identifier Maximum : 128 00:27:08.162 Number of ANA Group Identifiers : 128 00:27:08.162 Max Number of Allowed Namespaces : 1024 00:27:08.162 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:08.162 Command Effects Log Page: Supported 00:27:08.162 Get Log Page Extended Data: Supported 00:27:08.162 Telemetry Log Pages: Not Supported 00:27:08.162 Persistent Event Log Pages: Not Supported 00:27:08.162 Supported Log Pages Log Page: May Support 00:27:08.162 Commands Supported & Effects Log Page: Not Supported 00:27:08.162 Feature Identifiers & Effects Log Page:May Support 00:27:08.162 NVMe-MI Commands & Effects Log Page: May Support 00:27:08.162 Data Area 4 for Telemetry Log: Not Supported 00:27:08.162 Error Log Page Entries Supported: 128 00:27:08.162 Keep Alive: Supported 00:27:08.162 Keep Alive Granularity: 1000 ms 00:27:08.162 00:27:08.162 NVM Command Set Attributes 00:27:08.162 ========================== 00:27:08.162 Submission Queue Entry Size 00:27:08.162 Max: 64 00:27:08.162 Min: 64 00:27:08.162 Completion Queue Entry Size 00:27:08.162 Max: 16 00:27:08.162 Min: 16 00:27:08.162 Number of Namespaces: 1024 00:27:08.162 Compare Command: Not Supported 00:27:08.162 Write Uncorrectable Command: Not Supported 00:27:08.162 Dataset Management Command: Supported 00:27:08.162 Write Zeroes Command: Supported 00:27:08.162 Set Features Save Field: Not Supported 00:27:08.162 Reservations: Not Supported 00:27:08.162 Timestamp: Not Supported 00:27:08.162 Copy: Not Supported 00:27:08.162 Volatile Write Cache: Present 00:27:08.162 Atomic Write Unit (Normal): 1 00:27:08.162 Atomic Write Unit (PFail): 1 00:27:08.162 Atomic Compare & Write Unit: 1 00:27:08.162 Fused Compare & Write: Not Supported 00:27:08.162 Scatter-Gather List 00:27:08.162 SGL Command Set: Supported 00:27:08.162 SGL Keyed: Not Supported 00:27:08.162 SGL Bit Bucket Descriptor: Not Supported 00:27:08.162 SGL Metadata Pointer: Not Supported 00:27:08.162 Oversized SGL: Not Supported 00:27:08.162 SGL Metadata Address: Not Supported 00:27:08.162 SGL Offset: Supported 00:27:08.162 Transport SGL Data Block: Not Supported 00:27:08.162 Replay Protected Memory Block: Not Supported 00:27:08.162 00:27:08.162 Firmware Slot Information 00:27:08.162 ========================= 00:27:08.162 Active slot: 0 00:27:08.162 00:27:08.162 Asymmetric Namespace Access 00:27:08.162 =========================== 00:27:08.162 Change Count : 0 00:27:08.162 Number of ANA Group Descriptors : 1 00:27:08.162 ANA Group Descriptor : 0 00:27:08.162 ANA Group ID : 1 00:27:08.162 Number of NSID Values : 1 00:27:08.162 Change Count : 0 00:27:08.162 ANA State : 1 00:27:08.162 Namespace Identifier : 1 00:27:08.162 00:27:08.162 Commands Supported and Effects 00:27:08.162 ============================== 00:27:08.162 Admin Commands 00:27:08.162 -------------- 00:27:08.162 Get Log Page (02h): Supported 00:27:08.162 Identify (06h): Supported 00:27:08.162 Abort (08h): Supported 00:27:08.162 Set Features (09h): Supported 00:27:08.162 Get Features (0Ah): Supported 00:27:08.162 Asynchronous Event Request (0Ch): Supported 00:27:08.162 Keep Alive (18h): Supported 00:27:08.162 I/O Commands 00:27:08.162 ------------ 00:27:08.162 Flush (00h): Supported 00:27:08.162 Write (01h): Supported LBA-Change 00:27:08.162 Read (02h): Supported 00:27:08.162 Write Zeroes (08h): Supported LBA-Change 00:27:08.162 Dataset Management (09h): Supported 00:27:08.162 00:27:08.162 Error Log 00:27:08.162 ========= 00:27:08.162 Entry: 0 00:27:08.162 Error Count: 0x3 00:27:08.162 Submission Queue Id: 0x0 00:27:08.162 Command Id: 0x5 00:27:08.162 Phase Bit: 0 00:27:08.162 Status Code: 0x2 00:27:08.162 Status Code Type: 0x0 00:27:08.162 Do Not Retry: 1 00:27:08.162 
Error Location: 0x28 00:27:08.162 LBA: 0x0 00:27:08.162 Namespace: 0x0 00:27:08.162 Vendor Log Page: 0x0 00:27:08.162 ----------- 00:27:08.162 Entry: 1 00:27:08.162 Error Count: 0x2 00:27:08.162 Submission Queue Id: 0x0 00:27:08.162 Command Id: 0x5 00:27:08.163 Phase Bit: 0 00:27:08.163 Status Code: 0x2 00:27:08.163 Status Code Type: 0x0 00:27:08.163 Do Not Retry: 1 00:27:08.163 Error Location: 0x28 00:27:08.163 LBA: 0x0 00:27:08.163 Namespace: 0x0 00:27:08.163 Vendor Log Page: 0x0 00:27:08.163 ----------- 00:27:08.163 Entry: 2 00:27:08.163 Error Count: 0x1 00:27:08.163 Submission Queue Id: 0x0 00:27:08.163 Command Id: 0x4 00:27:08.163 Phase Bit: 0 00:27:08.163 Status Code: 0x2 00:27:08.163 Status Code Type: 0x0 00:27:08.163 Do Not Retry: 1 00:27:08.163 Error Location: 0x28 00:27:08.163 LBA: 0x0 00:27:08.163 Namespace: 0x0 00:27:08.163 Vendor Log Page: 0x0 00:27:08.163 00:27:08.163 Number of Queues 00:27:08.163 ================ 00:27:08.163 Number of I/O Submission Queues: 128 00:27:08.163 Number of I/O Completion Queues: 128 00:27:08.163 00:27:08.163 ZNS Specific Controller Data 00:27:08.163 ============================ 00:27:08.163 Zone Append Size Limit: 0 00:27:08.163 00:27:08.163 00:27:08.163 Active Namespaces 00:27:08.163 ================= 00:27:08.163 get_feature(0x05) failed 00:27:08.163 Namespace ID:1 00:27:08.163 Command Set Identifier: NVM (00h) 00:27:08.163 Deallocate: Supported 00:27:08.163 Deallocated/Unwritten Error: Not Supported 00:27:08.163 Deallocated Read Value: Unknown 00:27:08.163 Deallocate in Write Zeroes: Not Supported 00:27:08.163 Deallocated Guard Field: 0xFFFF 00:27:08.163 Flush: Supported 00:27:08.163 Reservation: Not Supported 00:27:08.163 Namespace Sharing Capabilities: Multiple Controllers 00:27:08.163 Size (in LBAs): 3750748848 (1788GiB) 00:27:08.163 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:08.163 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:08.163 UUID: 62891e7f-7f3b-4c7f-b372-245cdf5a4781 00:27:08.163 Thin Provisioning: Not Supported 00:27:08.163 Per-NS Atomic Units: Yes 00:27:08.163 Atomic Write Unit (Normal): 8 00:27:08.163 Atomic Write Unit (PFail): 8 00:27:08.163 Preferred Write Granularity: 8 00:27:08.163 Atomic Compare & Write Unit: 8 00:27:08.163 Atomic Boundary Size (Normal): 0 00:27:08.163 Atomic Boundary Size (PFail): 0 00:27:08.163 Atomic Boundary Offset: 0 00:27:08.163 NGUID/EUI64 Never Reused: No 00:27:08.163 ANA group ID: 1 00:27:08.163 Namespace Write Protected: No 00:27:08.163 Number of LBA Formats: 1 00:27:08.163 Current LBA Format: LBA Format #00 00:27:08.163 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:08.163 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.163 rmmod nvme_tcp 00:27:08.163 rmmod nvme_fabrics 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.163 15:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.153 15:39:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.153 15:39:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:10.153 15:39:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:10.153 15:39:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:10.153 15:39:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.153 15:39:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:10.153 15:39:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:10.153 15:39:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.153 15:39:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:10.153 15:39:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:10.414 15:39:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:13.714 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:13.714 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:13.714 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:13.714 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:13.975 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:14.545 00:27:14.545 real 0m19.929s 00:27:14.545 user 0m5.321s 00:27:14.545 sys 0m11.557s 00:27:14.545 15:39:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:14.545 15:39:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:14.545 ************************************ 00:27:14.545 END TEST nvmf_identify_kernel_target 00:27:14.545 ************************************ 00:27:14.545 15:39:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:14.545 15:39:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:14.545 15:39:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:14.545 15:39:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.545 ************************************ 00:27:14.545 START TEST nvmf_auth_host 00:27:14.545 ************************************ 00:27:14.545 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:14.545 * Looking for test storage... 
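
The unwind traced above (nvmf/common.sh@712 through @723, plus the iptr helper at @791) runs the creation steps in reverse: quiesce and unlink before rmdir, then unload the modules, then strip only the firewall rules the test added by re-loading the iptables-save output minus the lines tagged SPDK_NVMF. A condensed sketch of that teardown follows, with the same caveat as before that the enable attribute name comes from the nvmet configfs layout rather than the xtrace:

#!/usr/bin/env bash
# Sketch: tear down the kernel nvmet target created for the test.
NQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet

echo 0 > "$NVMET/subsystems/$NQN/namespaces/1/enable"   # quiesce the namespace first
rm -f "$NVMET/ports/1/subsystems/$NQN"                  # detach the subsystem from the port
rmdir "$NVMET/subsystems/$NQN/namespaces/1"
rmdir "$NVMET/ports/1"
rmdir "$NVMET/subsystems/$NQN"
modprobe -r nvmet_tcp nvmet                             # only unloads once configfs is empty
# Drop only the rules the test tagged with the SPDK_NVMF comment:
iptables-save | grep -v SPDK_NVMF | iptables-restore

Ordering matters here: rmdir on a subsystem that is still linked into a port, or that still has an enabled namespace, fails with EBUSY, which is why the symlink removal and the enable flag come first and the module unload comes last.
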
00:27:14.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:14.545 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:14.545 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:14.545 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:14.806 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:14.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.807 --rc genhtml_branch_coverage=1 00:27:14.807 --rc genhtml_function_coverage=1 00:27:14.807 --rc genhtml_legend=1 00:27:14.807 --rc geninfo_all_blocks=1 00:27:14.807 --rc geninfo_unexecuted_blocks=1 00:27:14.807 00:27:14.807 ' 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:14.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.807 --rc genhtml_branch_coverage=1 00:27:14.807 --rc genhtml_function_coverage=1 00:27:14.807 --rc genhtml_legend=1 00:27:14.807 --rc geninfo_all_blocks=1 00:27:14.807 --rc geninfo_unexecuted_blocks=1 00:27:14.807 00:27:14.807 ' 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:14.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.807 --rc genhtml_branch_coverage=1 00:27:14.807 --rc genhtml_function_coverage=1 00:27:14.807 --rc genhtml_legend=1 00:27:14.807 --rc geninfo_all_blocks=1 00:27:14.807 --rc geninfo_unexecuted_blocks=1 00:27:14.807 00:27:14.807 ' 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:14.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.807 --rc genhtml_branch_coverage=1 00:27:14.807 --rc genhtml_function_coverage=1 00:27:14.807 --rc genhtml_legend=1 00:27:14.807 --rc geninfo_all_blocks=1 00:27:14.807 --rc geninfo_unexecuted_blocks=1 00:27:14.807 00:27:14.807 ' 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.807 15:39:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:14.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:14.807 15:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:22.950 15:39:39 
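
A few records back, host/auth.sh declared the full DH-HMAC-CHAP test matrix up front: three digests, five FFDHE groups, and the fixed subsystem/host NQN pair used for every attempt. The arrays below are copied from the trace; the loop body is a placeholder, since the actual per-combination test driver is not visible at this point in the log:

    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            # placeholder: one authenticated connect per (digest, dhgroup) pair
            echo "testing ${digest} / ${dhgroup}"
        done
    done
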
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:22.950 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:22.950 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.950 
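
gather_supported_nvmf_pci_devs classifies NICs by vendor:device ID through SPDK's internal pci_bus_cache lookup, and here matches the two Intel E810 ports (8086:0x159b). A rough standalone equivalent using lspci, with the device IDs copied from the e810 array in the trace; pci_bus_cache itself is internal to nvmf/common.sh:

    # list E810-family NICs by vendor:device ID, with full PCI domain addresses
    for dev_id in 1592 159b; do
        lspci -D -d "8086:${dev_id}"
    done
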
15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:22.950 Found net devices under 0000:31:00.0: cvl_0_0 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:22.950 Found net devices under 0000:31:00.1: cvl_0_1 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.950 15:39:39 
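
For each matched NIC, the script resolves the PCI address to its network interface through sysfs and strips the path down to the bare interface name, which is how cvl_0_0 and cvl_0_1 are found here. The same resolution in isolation, assuming the first address from the trace:

    pci=0000:31:00.0
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $path ]] && echo "${path##*/}"    # prints the interface name, e.g. cvl_0_0
    done
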
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.950 15:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.950 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.950 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.950 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:22.950 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.950 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.950 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.950 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:22.950 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:22.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:27:22.950 00:27:22.950 --- 10.0.0.2 ping statistics --- 00:27:22.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.950 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:27:22.950 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:22.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:27:22.951 00:27:22.951 --- 10.0.0.1 ping statistics --- 00:27:22.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.951 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3931824 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3931824 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3931824 ']' 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
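
At this point nvmf_tcp_init has moved cvl_0_0 into the cvl_0_0_ns_spdk namespace, assigned 10.0.0.1/10.0.0.2, opened TCP 4420 in iptables, and ping-verified both directions; nvmfappstart then launches nvmf_tgt inside that namespace (PID 3931824 in this run) and blocks until its RPC socket answers. A condensed sketch of that launch-and-wait, with a simple socket poll standing in for the suite's waitforlisten helper:

    NS=(ip netns exec cvl_0_0_ns_spdk)

    # start the target inside the namespace with nvme_auth debug logging enabled
    "${NS[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    # crude stand-in for waitforlisten: poll for the RPC unix socket
    until [[ -S /var/tmp/spdk.sock ]]; do
        kill -0 "$nvmfpid" || { echo "target died" >&2; exit 1; }
        sleep 0.2
    done
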
00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:22.951 15:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.212 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:23.212 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:23.212 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:23.212 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:23.212 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.212 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.212 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:23.212 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:23.212 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.212 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.212 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.212 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=818b92247c3aa0378a6505eed060060a 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kwH 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 818b92247c3aa0378a6505eed060060a 0 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 818b92247c3aa0378a6505eed060060a 0 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=818b92247c3aa0378a6505eed060060a 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.213 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kwH 00:27:23.473 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kwH 00:27:23.473 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kwH 00:27:23.473 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.474 15:39:41 
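
gen_dhchap_key pulls len/2 random bytes with xxd, wraps the hex string into a DHHC-1 secret via an inline python snippet, and writes the result mode 0600 to a /tmp/spdk.key-* file. Judging from the keys visible later in this log (their base64 payload decodes to the ASCII hex string plus four trailing bytes), the wrapping is base64 over the hex string with a little-endian CRC32 appended; treat that CRC detail as a reconstruction rather than the confirmed implementation:

    key=$(xxd -p -c0 -l 16 /dev/urandom)      # 32 hex chars, as in "gen_dhchap_key null 32"
    file=$(mktemp -t spdk.key-null.XXX)

    # DHHC-1:<2-digit digest id>:<base64(hex string + CRC32, assumed)>:
    # digest ids per the trace: null=0 sha256=1 sha384=2 sha512=3
    python3 -c 'import base64,struct,sys,zlib;k=sys.argv[1].encode();print("DHHC-1:%02d:%s:"%(int(sys.argv[2]),base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode()))' "$key" 0 > "$file"

    chmod 0600 "$file"
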
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d23b12197c18c065be6973c34ecda10e4a99217b33cb887b8b70ee759e00c4cc 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gul 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d23b12197c18c065be6973c34ecda10e4a99217b33cb887b8b70ee759e00c4cc 3 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d23b12197c18c065be6973c34ecda10e4a99217b33cb887b8b70ee759e00c4cc 3 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d23b12197c18c065be6973c34ecda10e4a99217b33cb887b8b70ee759e00c4cc 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gul 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gul 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.gul 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2b1cd9cf3b169548ddf750bb5241e9b0279d1b00bd8ca0b2 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.aEV 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2b1cd9cf3b169548ddf750bb5241e9b0279d1b00bd8ca0b2 0 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2b1cd9cf3b169548ddf750bb5241e9b0279d1b00bd8ca0b2 0 
00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2b1cd9cf3b169548ddf750bb5241e9b0279d1b00bd8ca0b2 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.aEV 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.aEV 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.aEV 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dc4bbe883d058f047abb47972b876735dc39c4ac48578c45 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.o7E 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dc4bbe883d058f047abb47972b876735dc39c4ac48578c45 2 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dc4bbe883d058f047abb47972b876735dc39c4ac48578c45 2 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dc4bbe883d058f047abb47972b876735dc39c4ac48578c45 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.o7E 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.o7E 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.o7E 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.474 15:39:41 
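
Each of these gen_dhchap_key calls repeats the same recipe; only the digest id and the requested length vary, and the hex length maps to half as many bytes read from /dev/urandom. A one-line restatement of that relationship (gen_hex is an illustrative name, not the suite's):

    # hex length requested -> len/2 bytes from /dev/urandom,
    # e.g. "sha384 48" above reads 24 bytes (xxd -l 24)
    gen_hex() { xxd -p -c0 -l "$(( $1 / 2 ))" /dev/urandom; }
    key=$(gen_hex 48)
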
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=22f4b01a64f85f01d9e4f1a52cdd7436 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jyN 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 22f4b01a64f85f01d9e4f1a52cdd7436 1 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 22f4b01a64f85f01d9e4f1a52cdd7436 1 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=22f4b01a64f85f01d9e4f1a52cdd7436 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:23.474 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jyN 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jyN 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jyN 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=31f154c244d7232de8e711d19de0ecdd 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.nVL 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 31f154c244d7232de8e711d19de0ecdd 1 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 31f154c244d7232de8e711d19de0ecdd 1 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.735 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=31f154c244d7232de8e711d19de0ecdd 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.nVL 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.nVL 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.nVL 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c16468c60e6bdcb27c65a12b6eaeedc7808dcb69f5f9dc18 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jGR 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c16468c60e6bdcb27c65a12b6eaeedc7808dcb69f5f9dc18 2 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c16468c60e6bdcb27c65a12b6eaeedc7808dcb69f5f9dc18 2 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c16468c60e6bdcb27c65a12b6eaeedc7808dcb69f5f9dc18 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jGR 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jGR 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.jGR 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:23.736 15:39:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=139ae6d9d95df7e07b16193aa1768ec1 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kzf 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 139ae6d9d95df7e07b16193aa1768ec1 0 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 139ae6d9d95df7e07b16193aa1768ec1 0 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=139ae6d9d95df7e07b16193aa1768ec1 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kzf 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kzf 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.kzf 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=76c5497164f17d61e7dee0f317b5995ff299eeb86ccc51ec849a7896eea6c7dc 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vZT 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 76c5497164f17d61e7dee0f317b5995ff299eeb86ccc51ec849a7896eea6c7dc 3 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 76c5497164f17d61e7dee0f317b5995ff299eeb86ccc51ec849a7896eea6c7dc 3 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=76c5497164f17d61e7dee0f317b5995ff299eeb86ccc51ec849a7896eea6c7dc 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:23.736 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vZT 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vZT 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.vZT 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3931824 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3931824 ']' 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kwH 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.gul ]] 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gul 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.aEV 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.o7E ]] 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.o7E 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.998 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.260 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.260 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.260 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jyN 00:27:24.260 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.260 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.260 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.260 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.nVL ]] 00:27:24.260 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nVL 00:27:24.260 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.260 15:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.jGR 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.kzf ]] 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.kzf 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.vZT 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.260 15:39:42 
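
With the target up, every generated secret is handed to it through the keyring_file_add_key RPC: the trace shows host/auth.sh registering key0..key4 plus the controller keys ckey0..ckey3 (ckey4 is deliberately left empty). The same loop in standalone form, assuming rpc.py at its usual scripts/ path and the keys/ckeys arrays from the generation step:

    for i in "${!keys[@]}"; do
        ./scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
        if [[ -n ${ckeys[$i]} ]]; then
            ./scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done
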
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:24.260 15:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:27.564 Waiting for block devices as requested 00:27:27.564 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:27.824 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:27.824 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:27.824 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:28.085 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:28.085 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:28.085 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:28.346 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:28.346 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:28.608 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:28.608 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:28.608 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:28.608 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:28.869 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:28.869 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:28.869 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:28.869 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:29.811 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:29.811 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:29.811 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:29.811 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:29.811 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:29.811 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:29.811 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:29.811 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:29.811 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:29.811 No valid GPT data, bailing 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:30.072 15:39:47 
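
configure_kernel_target builds a kernel-side NVMe-oF target on 10.0.0.1:4420 backed by the freshly reset /dev/nvme0n1: it creates the subsystem, namespace 1, and port 1 under configfs, and the bare echo records that follow populate their attributes. xtrace drops redirections, so the attribute paths below are reconstructed from the standard kernel nvmet configfs layout; the trace's "echo SPDK-nqn..." most likely sets a serial or model string and is omitted here rather than guessed at:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"

    echo 1 > "$subsys/attr_allow_any_host"           # assumed target of one of the 'echo 1's
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"

    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
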
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:27:30.072 00:27:30.072 Discovery Log Number of Records 2, Generation counter 2 00:27:30.072 =====Discovery Log Entry 0====== 00:27:30.072 trtype: tcp 00:27:30.072 adrfam: ipv4 00:27:30.072 subtype: current discovery subsystem 00:27:30.072 treq: not specified, sq flow control disable supported 00:27:30.072 portid: 1 00:27:30.072 trsvcid: 4420 00:27:30.072 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:30.072 traddr: 10.0.0.1 00:27:30.072 eflags: none 00:27:30.072 sectype: none 00:27:30.072 =====Discovery Log Entry 1====== 00:27:30.072 trtype: tcp 00:27:30.072 adrfam: ipv4 00:27:30.072 subtype: nvme subsystem 00:27:30.072 treq: not specified, sq flow control disable supported 00:27:30.072 portid: 1 00:27:30.072 trsvcid: 4420 00:27:30.072 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:30.072 traddr: 10.0.0.1 00:27:30.072 eflags: none 00:27:30.072 sectype: none 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:30.072 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.073 15:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.333 nvme0n1 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.333 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
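The connect_authenticate call the trace enters here reduces to the RPC sequence that follows in the log, collected below as a standalone sketch. It assumes rpc.py (SPDK's RPC client, wrapped by rpc_cmd in this test) is on PATH and pointed at the initiator's RPC socket, and that the keyring entries key0/ckey0 were registered earlier in the run, as they are here:

    # Sketch of connect_authenticate for keyid=0 under sha256/ffdhe2048.
    # key0/ckey0 name keys already loaded into SPDK's keyring; 10.0.0.1:4420
    # is the kernel soft-target configured above.
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # the attach only succeeds if the DH-HMAC-CHAP handshake completed
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0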
00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.334 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.595 nvme0n1 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.595 15:39:48 
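On the target side, each nvmet_auth_set_key in this trace programs the kernel soft-target's expectations for the host through configfs. The trace shows the echo commands but not their redirect targets; a sketch of that half, assuming the standard nvmet DH-CHAP attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), with $KEY/$CKEY standing for the DHHC-1 strings echoed above:

    # Target-side counterpart: tell nvmet which parameters and secrets to
    # expect from host0 on the next authenticated connect.
    HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$HOST/dhchap_hash"      # negotiated digest
    echo ffdhe2048      > "$HOST/dhchap_dhgroup"   # negotiated DH group
    echo "$KEY"         > "$HOST/dhchap_key"       # host secret
    echo "$CKEY"        > "$HOST/dhchap_ctrl_key"  # controller secret (bidirectional only)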
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.595 nvme0n1 00:27:30.595 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.857 nvme0n1 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.857 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.117 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.118 15:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.118 nvme0n1 00:27:31.118 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.118 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.118 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.118 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.118 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.118 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.118 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.118 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.118 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.118 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 nvme0n1 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.379 15:39:49 
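Keyid 4 is the one slot with no controller key (ckey is empty above), which is why the ckey=(${ckeys[keyid]:+...}) expansion yields nothing and the controller is attached with --dhchap-key only, i.e. unidirectional authentication. The idiom in isolation, with placeholder values:

    # ${var:+word} expands to word only when var is set AND non-empty, so an
    # empty ckeys[4] produces an empty array and no --dhchap-ctrlr-key argument.
    ckeys=(c0 c1 c2 c3 "")
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # prints 0; with keyid=1 it would print 2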
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.379 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.640 nvme0n1 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.640 
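The DHHC-1:NN: prefixes cycling through the keys above follow the NVMe DH-HMAC-CHAP secret representation: the two-digit field records the hash used to transform the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), and the payload is the base64 of the secret plus a CRC-32, terminated by ':'. Secrets of this shape can be produced with nvme-cli; the flag names below are from recent nvme-cli and should be treated as a sketch rather than a verified invocation:

    # generate a SHA-256-transformed 32-byte DH-HMAC-CHAP secret bound to host0
    nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn nqn.2024-02.io.spdk:host0
    # -> DHHC-1:01:<base64 secret+crc>: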
15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.640 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.907 nvme0n1 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.907 15:39:49 
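The ip_candidates block repeated throughout these traces is get_main_ns_ip choosing which address the initiator should dial: it maps the transport to the name of an environment variable, dereferences it, and ends up echoing 10.0.0.1 in this run. A condensed, runnable restatement of the logic visible in the trace (variable names taken from it):

    # get_main_ns_ip, condensed: transport -> env var name -> value
    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    var=${ip_candidates[tcp]}     # NVMF_INITIATOR_IP for the tcp transport
    ip=${!var:-10.0.0.1}          # indirect expansion; resolves to 10.0.0.1 here
    echo "$ip"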
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.907 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.908 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.908 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.908 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.908 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.908 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.908 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.908 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.908 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.908 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.908 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.908 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.908 15:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.169 nvme0n1 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:32.169 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.170 15:39:50 
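The recurring check [[ nvme0 == \n\v\m\e\0 ]] in this section looks odd but is only guarding against pattern matching: the right-hand side of == inside [[ ]] is a glob, and xtrace prints the backslashes that force a literal comparison. An equivalent, quoted form of the same verification step:

    # verify the authenticated attach actually produced a controller named nvme0
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]] && echo "attach + DH-HMAC-CHAP handshake OK"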
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.170 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.430 nvme0n1 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.430 15:39:50 
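For orientation, the stretch of log from the first ffdhe2048 run to here is one sweep of host/auth.sh's nested loops (the @100/@101/@102 markers in the trace); the digest and dhgroup lists come from the printf calls earlier in the run. Reconstructed as a runnable skeleton that just prints the schedule:

    # the sweep driving this section: every digest x DH group x key slot
    for digest in sha256 sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            for keyid in 0 1 2 3 4; do   # the script iterates "${!keys[@]}"
                echo "nvmet_auth_set_key   $digest $dhgroup $keyid"  # target side
                echo "connect_authenticate $digest $dhgroup $keyid"  # initiator side + verify
            done
        done
    done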
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.430 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.431 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.431 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.431 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.431 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.431 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.691 nvme0n1 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.691 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.951 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.951 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.951 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.951 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.951 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.952 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.952 nvme0n1 00:27:32.952 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:33.212 15:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.212 15:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.212 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.474 nvme0n1 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
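At this point the trace is inside connect_authenticate for the sha256/ffdhe4096, keyid 2 case: bdev_nvme_set_options has just pinned the initiator to that digest/dhgroup pair. Every iteration in this log performs the same initiator-side round trip; the following is a minimal sketch of it, reconstructed from the xtrace output rather than copied from auth.sh (rpc_cmd, jq, the NQNs and 10.0.0.1:4420 come from the trace; the ckeys array is assumed to hold the controller secrets, and keyN/ckeyN are key names assumed to be registered with the SPDK target beforehand):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # auth.sh@58: only pass a controller key when one exists for this keyid
    # (keyid 4 has an empty ckey, so its attach omits --dhchap-ctrlr-key).
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # auth.sh@60: restrict the initiator to the digest/dhgroup under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # auth.sh@61: for tcp, get_main_ns_ip resolves NVMF_INITIATOR_IP (10.0.0.1 in
    # this run), and the attach only completes if DH-HMAC-CHAP authentication succeeds.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # auth.sh@64-65: confirm the controller came up, then detach for the next round.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The repeated "[[ nvme0 == \n\v\m\e\0 ]]" lines in the trace are xtrace's escaped rendering of that final name comparison.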
00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.474 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.735 nvme0n1 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.735 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.736 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.996 nvme0n1 00:27:33.996 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.996 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.996 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.996 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.996 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.996 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.257 15:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.257 15:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.257 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.518 nvme0n1 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.518 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.088 nvme0n1 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 
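The echoes just above (auth.sh@48-51) are nvmet_auth_set_key programming keyid 1 of the sha256/ffdhe6144 pass on the target side; xtrace does not show where their output is redirected. Under the usual kernel-nvmet configfs layout they would land in the per-host DH-CHAP attributes, roughly as sketched below; the configfs path is an assumption inferred from the hostnqn used by the attach calls, not something this log shows:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Assumed location of the kernel nvmet host entry for the initiator's hostnqn.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "$host/dhchap_hash"     # auth.sh@48, e.g. hmac(sha256)
    echo "$dhgroup"        > "$host/dhchap_dhgroup"  # auth.sh@49, e.g. ffdhe6144
    echo "$key"            > "$host/dhchap_key"      # auth.sh@50, the DHHC-1 secret
    # auth.sh@51: also set a controller key for bidirectional auth when one exists.
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}

The DHHC-1 strings cycled through keyids 0-4 follow the NVMe DH-HMAC-CHAP secret representation DHHC-1:xx:<base64>:, where the middle field gives the secret class (00 for a secret used as-is; 01, 02, 03 for 32-, 48- and 64-byte secrets tied to SHA-256/384/512) and the base64 payload ends with a CRC-32 of the secret. The outer sweep is the pair of loops visible at auth.sh@101-103: for each dhgroup, every keyid is programmed on the target and then connected from the initiator in turn.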
00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.088 15:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.349 nvme0n1 00:27:35.349 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.349 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.349 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.349 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.349 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.349 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.349 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.349 15:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.349 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.349 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.610 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.871 nvme0n1 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.871 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.131 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.131 15:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.390 nvme0n1 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.390 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.961 nvme0n1 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.961 15:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:37.530 nvme0n1 00:27:37.530 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.530 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.530 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.530 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.530 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.530 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.790 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.790 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.790 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.790 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.790 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.790 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.790 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:37.790 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
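
Each keyid iteration starts by provisioning the kernel nvmet target: nvmet_auth_set_key (host/auth.sh@42-51 in the trace) echoes the digest as 'hmac(shaNNN)', the FFDHE group, the DHHC-1 secret, and, when the key has a controller secret, the ckey. The destinations of those echoes are outside this excerpt; the sketch below is an assumed reconstruction using the stock Linux nvmet configfs attribute names for per-host DH-HMAC-CHAP material (kernel 5.19 or newer).

  # Assumed effect of: nvmet_auth_set_key sha256 ffdhe8192 0
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"
  echo 'ffdhe8192'    > "$host_dir/dhchap_dhgroup"
  # Host secret (key 0 in the trace):
  echo 'DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0:' \
      > "$host_dir/dhchap_key"
  # Controller secret for bidirectional auth; keys 0-3 carry one, key 4
  # does not (the trace shows ckey= empty for keyid 4). Abbreviated here,
  # the full value appears in the trace:
  echo 'DHHC-1:03:ZDIz...08s=:' > "$host_dir/dhchap_ctrl_key"
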
"ckey${keyid}"}) 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.791 15:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.361 nvme0n1 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:38.361 
15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.361 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
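
All the secrets in these traces use the DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:. As background (this is how nvme-cli's gen-dhchap-key produces them, not something visible in the log itself): <t> names the transformation applied to the secret, 00 meaning the decoded bytes are used as-is and 01/02/03 meaning SHA-256/384/512, and the base64 payload decodes to the secret followed by a 4-byte CRC-32 of it, so a 48-character payload is a 32-byte secret. Both styles appear in this run (00/03 and 01/02 keys). A quick length check:

  # Decode a DHHC-1 secret's payload and count bytes (GNU base64 assumed).
  key='DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0:'
  b64=${key#DHHC-1:*:}   # strip the "DHHC-1:<t>:" prefix
  b64=${b64%:}           # strip the trailing colon
  echo -n "$b64" | base64 -d | wc -c   # 36 bytes = 32-byte secret + CRC-32
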
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.362 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.303 nvme0n1 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.303 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.304 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:39.304 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:39.304 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:39.304 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:39.304 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.304 
15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.304 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.304 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.304 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.304 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:39.304 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.304 15:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.304 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.876 nvme0n1 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
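
On the host side, connect_authenticate (host/auth.sh@55-61) first pins bdev_nvme to a single digest and DH group, then attaches; a successful attach therefore proves that exact combination negotiated end to end. rpc_cmd in these traces is the suite's wrapper around SPDK's scripts/rpc.py, so the keyid-0 case is roughly equivalent to the following, assuming key0/ckey0 were registered with SPDK's keyring earlier in the script (that setup precedes this excerpt):

  # Pin the negotiable parameters, then attach with both secrets.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
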
common/autotest_common.sh@10 -- # set +x 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.876 15:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.447 nvme0n1 00:27:40.447 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.447 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.447 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.447 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.447 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.447 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.447 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.447 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.447 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.447 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
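
After each attach the test confirms a controller really materialized before declaring the handshake good, then detaches so the next combination starts clean (host/auth.sh@64-65). The odd-looking [[ nvme0 == \n\v\m\e\0 ]] in the trace is just xtrace's rendering of a quoted right-hand side: quoting it forces a literal string comparison instead of glob matching. Standalone, the step is:

  # Expect exactly one controller named nvme0, then tear it down.
  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]] || exit 1
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
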
DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.709 nvme0n1 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.709 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 nvme0n1 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:40.971 15:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.971 15:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.232 nvme0n1 00:27:41.232 15:39:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:41.232 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.233 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.494 nvme0n1 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
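
By this point the trace has swept keyids 0 through 4 for sha256/ffdhe8192 and wrapped to sha384 with smaller groups, which makes the driver loop visible in the source markers (@100 digest, @101 dhgroup, @102 keyid, @103/@104 the two steps per combination). Its shape, with the keys/ckeys arrays assumed to be filled earlier in auth.sh:

  # Sweep every digest x dhgroup x key combination; each pass provisions
  # the target, then proves the host can authenticate against it.
  for digest in "${digests[@]}"; do        # sha256 sha384 ...      (@100)
      for dhgroup in "${dhgroups[@]}"; do  # ffdhe2048 .. ffdhe8192 (@101)
          for keyid in "${!keys[@]}"; do   # 0 1 2 3 4              (@102)
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
          done
      done
  done
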
host/auth.sh@44 -- # digest=sha384 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.494 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.755 nvme0n1 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.755 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.756 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.017 nvme0n1 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.017 
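
The same target-side provisioning can also be cross-checked without SPDK. The following is not part of this suite's flow, purely an assumed equivalent using the kernel initiator; it presumes nvme-cli 2.x and a kernel with NVMe in-band authentication (5.19 or newer), and abbreviates the controller secret to the value shown in the trace:

  # Hypothetical manual connect to the same kernel nvmet target.
  nvme connect -t tcp -a 10.0.0.1 -s 4420 \
      --hostnqn nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-secret 'DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0:' \
      --dhchap-ctrl-secret 'DHHC-1:03:ZDIz...08s=:'   # abbreviated
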
15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.017 15:39:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.017 15:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.278 nvme0n1 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.278 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.539 nvme0n1 00:27:42.539 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.539 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.539 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.539 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.539 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.539 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.539 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.540 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.801 nvme0n1 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.801 
15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.801 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.063 nvme0n1 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.063 
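
keyid 4 above is the one secret without a companion controller key, and the trace shows the [[ -z '' ]] test at host/auth.sh@51 skipping the final echo accordingly. On the host side the same optionality is handled by the ${ckeys[keyid]:+...} expansion at @58, which expands to nothing when the array slot is empty. A sketch of connect_authenticate assembled from the recorded commands (the verify/detach steps at @64-65 run after it in the caller's loop but are folded in here for readability):

connect_authenticate() {    # host/auth.sh@55-61, as recorded
    local digest=$1 dhgroup=$2 keyid=$3
    # ${var:+word}: the --dhchap-ctrlr-key flag disappears entirely
    # when ckeys[keyid] is empty, as it is for keyid 4 in this run.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # Authentication succeeded iff the controller actually came up
    # (host/auth.sh@64); it is then detached for the next pass (@65).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
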
15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.063 15:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.324 nvme0n1 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:43.324 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.325 15:40:01 
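
A recurring pattern throughout this span: every rpc_cmd is bracketed by xtrace_disable (common/autotest_common.sh@561) beforehand and a restore afterwards; the [[ 0 == 0 ]] checks at @589 belong to that restore, and the set +x at @10 is where tracing is actually switched off. The point is to keep the RPC client's own chatter out of the trace. A hypothetical minimal version of the idiom; SPDK's real helpers keep a stack of saved states and differ in detail:

# Illustration only, not SPDK's implementation.
xtrace_disable() {
    _prev_xtrace=${-//[^x]/}   # remember whether -x is currently set
    set +x
}
xtrace_restore() {
    if [[ $_prev_xtrace == *x* ]]; then
        set -x
    fi
}
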
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.325 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.585 nvme0n1 00:27:43.585 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.585 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.585 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.585 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.585 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.585 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.846 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.107 nvme0n1 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:44.107 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.108 15:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.369 nvme0n1 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.369 15:40:02 
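
Before each attach, get_main_ns_ip (nvmf/common.sh@769-783, traced in full just above) picks which environment variable holds the target address for the transport in use and indirects through it; 10.0.0.1 is NVMF_INITIATOR_IP in this run. A sketch under the assumption that the transport comes from a variable like TEST_TRANSPORT and that the untaken branches fail out, since xtrace only records the path actually taken:

get_main_ns_ip() {   # reconstructed from nvmf/common.sh@769-783
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # @775: bail out if there is no transport or no candidate for it
    # (failure handling here is assumed; only success is traced).
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # @776: the variable *name*
    [[ -z ${!ip} ]] && return 1            # @778: indirect expansion to the address
    echo "${!ip}"                          # @783: 10.0.0.1 here
}
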
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.369 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.630 nvme0n1 00:27:44.630 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.630 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.630 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.630 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.630 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.630 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.630 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.630 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.630 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.630 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.891 15:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.151 nvme0n1 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.151 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.412 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.673 nvme0n1 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.674 15:40:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.674 15:40:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.674 15:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.246 nvme0n1 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.246 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.247 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.247 
15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.817 nvme0n1 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.817 15:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.078 nvme0n1 00:27:47.078 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.078 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.078 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.078 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.078 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.340 15:40:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.340 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.913 nvme0n1 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.913 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.914 15:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.854 nvme0n1 00:27:48.854 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.854 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.854 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.854 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.854 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.854 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.854 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.854 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.854 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:48.854 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.854 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.855 
15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.855 15:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.481 nvme0n1 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.481 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.482 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.053 nvme0n1 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.053 15:40:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.053 15:40:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.053 15:40:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.053 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.996 nvme0n1 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:50.996 nvme0n1 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.996 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.997 15:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.257 nvme0n1 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:51.257 
15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.257 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.258 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.518 nvme0n1 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.518 
15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.518 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.519 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.782 nvme0n1 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.782 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.050 nvme0n1 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.050 15:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.311 nvme0n1 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.311 
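On the host side, connect_authenticate (auth.sh@55-65) drives the SPDK initiator over RPC: it restricts the allowed digests and DH groups with bdev_nvme_set_options, attaches with the named DHCHAP keys, checks that the controller appeared, then detaches. The same sequence, sketched with scripts/rpc.py (rpc_cmd in the trace is the test suite's wrapper around it; key0/ckey0 are keyring names the script registers earlier, before this section):

    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0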
15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.311 15:40:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.311 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.572 nvme0n1 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:52.572 15:40:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.572 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.833 nvme0n1 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.833 15:40:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.833 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.094 nvme0n1 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.094 
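Every secret in this trace uses the NVMe DH-HMAC-CHAP key representation DHHC-1:<t>:<base64>:, where <t> selects the optional secret transformation (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and, in the format emitted by nvme-cli's gen-dhchap-key, the base64 payload is the raw secret followed by a 4-byte CRC-32. A quick shape check on one of the test keys above (field splitting and length only; the CRC is not recomputed here):

    key='DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ:'
    IFS=: read -r tag xform b64 _ <<< "$key"
    bytes=$(printf '%s' "$b64" | base64 -d | wc -c)
    # payload = secret || CRC-32, so 36 decoded bytes imply a 32-byte secret
    echo "$tag transform=$xform secret=$((bytes - 4)) bytes"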
15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.094 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.095 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.095 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.095 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.095 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.095 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.095 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.095 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.095 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.095 15:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:53.355 nvme0n1 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.355 15:40:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.355 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.616 nvme0n1 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.616 15:40:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.616 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.617 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.617 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.617 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.617 15:40:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.617 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.617 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.617 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.617 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.617 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.877 nvme0n1 00:27:53.877 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.877 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.877 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.877 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.877 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.877 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.138 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.139 15:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.400 nvme0n1 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.400 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.401 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.674 nvme0n1 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.674 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.675 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.977 nvme0n1 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.977 15:40:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.977 15:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.625 nvme0n1 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.625 15:40:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.625 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.886 nvme0n1 00:27:55.886 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.886 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.886 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.886 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.886 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.886 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.146 15:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.407 nvme0n1 00:27:56.407 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.407 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.407 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.407 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.407 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.407 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.407 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.407 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.407 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.668 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.928 nvme0n1 00:27:56.928 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.928 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.928 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.928 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.928 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.928 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.928 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.928 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.928 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.928 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.929 15:40:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.929 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.188 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:57.188 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.188 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.188 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.188 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.189 15:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.450 nvme0n1 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODE4YjkyMjQ3YzNhYTAzNzhhNjUwNWVlZDA2MDA2MGEyvLz0: 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: ]] 00:27:57.450 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDIzYjEyMTk3YzE4YzA2NWJlNjk3M2MzNGVjZGExMGU0YTk5MjE3YjMzY2I4ODdiOGI3MGVlNzU5ZTAwYzRjY/0l08s=: 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.451 15:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.393 nvme0n1 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.393 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.964 nvme0n1 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.964 15:40:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.964 15:40:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.964 15:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.535 nvme0n1 00:27:59.535 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzE2NDY4YzYwZTZiZGNiMjdjNjVhMTJiNmVhZWVkYzc4MDhkY2I2OWY1ZjlkYzE4/XhqKw==: 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: ]] 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTM5YWU2ZDlkOTVkZjdlMDdiMTYxOTNhYTE3NjhlYzFl1ACf: 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.796 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.797 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.797 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.797 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.797 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.797 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.797 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.797 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.797 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.797 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.797 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.797 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.797 15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.797 
15:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.420 nvme0n1 00:28:00.420 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.420 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.420 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.420 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.420 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.420 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzZjNTQ5NzE2NGYxN2Q2MWU3ZGVlMGYzMTdiNTk5NWZmMjk5ZWViODZjY2M1MWVjODQ5YTc4OTZlZWE2YzdkY+XuZB0=: 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.421 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.991 nvme0n1 00:28:00.991 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.251 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.251 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.251 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.251 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.251 15:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:28:01.251 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.252 request: 00:28:01.252 { 00:28:01.252 "name": "nvme0", 00:28:01.252 "trtype": "tcp", 00:28:01.252 "traddr": "10.0.0.1", 00:28:01.252 "adrfam": "ipv4", 00:28:01.252 "trsvcid": "4420", 00:28:01.252 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:01.252 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:01.252 "prchk_reftag": false, 00:28:01.252 "prchk_guard": false, 00:28:01.252 "hdgst": false, 00:28:01.252 "ddgst": false, 00:28:01.252 "allow_unrecognized_csi": false, 00:28:01.252 "method": "bdev_nvme_attach_controller", 00:28:01.252 "req_id": 1 00:28:01.252 } 00:28:01.252 Got JSON-RPC error response 00:28:01.252 response: 00:28:01.252 { 00:28:01.252 "code": -5, 00:28:01.252 "message": "Input/output error" 00:28:01.252 } 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
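A note on the DHHC-1 strings fed to nvmet_auth_set_key earlier in this trace: per the NVMe DH-HMAC-CHAP secret representation, the field after "DHHC-1:" names the key transform (00 = secret stored untransformed, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32 trailer. A minimal sketch that splits and measures one of this run's keys (plain bash, no SPDK needed; the transform/trailer reading is from the spec, not from this log):

    # Split and measure one DH-HMAC-CHAP secret from this run.
    key='DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==:'
    IFS=: read -r _ transform b64 _ <<< "$key"
    echo "transform id: $transform"   # 00: secret used as-is, no hash transform
    echo "payload: $(printf '%s' "$b64" | base64 -d | wc -c) bytes (secret + 4-byte CRC-32)"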
00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.252 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.513 request: 00:28:01.513 { 00:28:01.513 "name": "nvme0", 00:28:01.513 "trtype": "tcp", 00:28:01.513 "traddr": "10.0.0.1", 00:28:01.513 "adrfam": "ipv4", 00:28:01.513 "trsvcid": "4420", 00:28:01.513 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:01.513 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:01.513 "prchk_reftag": false, 00:28:01.513 "prchk_guard": false, 00:28:01.513 "hdgst": false, 00:28:01.513 "ddgst": false, 00:28:01.513 "dhchap_key": "key2", 00:28:01.513 "allow_unrecognized_csi": false, 00:28:01.513 "method": "bdev_nvme_attach_controller", 00:28:01.513 "req_id": 1 00:28:01.513 } 00:28:01.513 Got JSON-RPC error response 00:28:01.513 response: 00:28:01.513 { 00:28:01.513 "code": -5, 00:28:01.513 "message": "Input/output error" 00:28:01.513 } 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
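Both rejected attaches above run under the suite's NOT wrapper, which inverts the exit status so the -5 (Input/output error) JSON-RPC response counts as a pass. The same negative check can be reproduced standalone against this run's target with SPDK's rpc.py; a sketch, assuming the workspace layout seen elsewhere in this log and a target still listening on 10.0.0.1:4420:

    # Expect failure: attaching with a mismatched DH-HMAC-CHAP key must be rejected.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
        echo "FAIL: attach with the wrong key unexpectedly succeeded" >&2
        exit 1
    fi
    echo "PASS: mismatched key rejected, as the test requires"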
00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.513 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.514 request: 00:28:01.514 { 00:28:01.514 "name": "nvme0", 00:28:01.514 "trtype": "tcp", 00:28:01.514 "traddr": "10.0.0.1", 00:28:01.514 "adrfam": "ipv4", 00:28:01.514 "trsvcid": "4420", 00:28:01.514 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:01.514 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:01.514 "prchk_reftag": false, 00:28:01.514 "prchk_guard": false, 00:28:01.514 "hdgst": false, 00:28:01.514 "ddgst": false, 00:28:01.514 "dhchap_key": "key1", 00:28:01.514 "dhchap_ctrlr_key": "ckey2", 00:28:01.514 "allow_unrecognized_csi": false, 00:28:01.514 "method": "bdev_nvme_attach_controller", 00:28:01.514 "req_id": 1 00:28:01.514 } 00:28:01.514 Got JSON-RPC error response 00:28:01.514 response: 00:28:01.514 { 00:28:01.514 "code": -5, 00:28:01.514 "message": "Input/output 
error" 00:28:01.514 } 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.514 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.775 nvme0n1 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.775 request: 00:28:01.775 { 00:28:01.775 "name": "nvme0", 00:28:01.775 "dhchap_key": "key1", 00:28:01.775 "dhchap_ctrlr_key": "ckey2", 00:28:01.775 "method": "bdev_nvme_set_keys", 00:28:01.775 "req_id": 1 00:28:01.775 } 00:28:01.775 Got JSON-RPC error response 00:28:01.775 response: 00:28:01.775 { 00:28:01.775 "code": -13, 00:28:01.775 "message": "Permission denied" 00:28:01.775 } 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.775 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.035 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:02.035 15:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:02.975 15:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.975 15:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:02.975 15:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.975 15:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.975 15:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.975 15:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:02.975 15:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmIxY2Q5Y2YzYjE2OTU0OGRkZjc1MGJiNTI0MWU5YjAyNzlkMWIwMGJkOGNhMGIy5UAz8w==: 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: ]] 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZGM0YmJlODgzZDA1OGYwNDdhYmI0Nzk3MmI4NzY3MzVkYzM5YzRhYzQ4NTc4YzQ1lJTN0w==: 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.917 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.918 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.918 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:03.918 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.918 15:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.178 nvme0n1 00:28:04.178 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.178 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:04.178 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJmNGIwMWE2NGY4NWYwMWQ5ZTRmMWE1MmNkZDc0MzZmIflZ: 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: ]] 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzFmMTU0YzI0NGQ3MjMyZGU4ZTcxMWQxOWRlMGVjZGRNS50P: 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.179 request: 00:28:04.179 { 00:28:04.179 "name": "nvme0", 00:28:04.179 "dhchap_key": "key2", 00:28:04.179 "dhchap_ctrlr_key": "ckey1", 00:28:04.179 "method": "bdev_nvme_set_keys", 00:28:04.179 "req_id": 1 00:28:04.179 } 00:28:04.179 Got JSON-RPC error response 00:28:04.179 response: 00:28:04.179 { 00:28:04.179 "code": -13, 00:28:04.179 "message": "Permission denied" 00:28:04.179 } 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:04.179 15:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:05.596 15:40:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.596 rmmod nvme_tcp 00:28:05.596 rmmod nvme_fabrics 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3931824 ']' 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3931824 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 3931824 ']' 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 3931824 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3931824 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3931824' 00:28:05.596 killing process with pid 3931824 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 3931824 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 3931824 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:05.596 15:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:08.143 15:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:11.446 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:11.446 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:11.446 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:11.446 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:11.446 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:11.446 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:11.446 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:11.446 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:11.446 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:11.447 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:11.447 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:11.447 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:11.447 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:11.447 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:11.447 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:11.447 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:11.447 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:12.017 15:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kwH /tmp/spdk.key-null.aEV /tmp/spdk.key-sha256.jyN /tmp/spdk.key-sha384.jGR /tmp/spdk.key-sha512.vZT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:12.017 15:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:15.321 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
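The clean_kernel_target trace above (nvmf/common.sh@712-723) dismantles the Linux-kernel nvmet target through configfs in strict dependency order before the modules can be unloaded. Condensed into a standalone sketch (NQN and port number from this run; the bare "echo 0" is assumed to disable the namespace, which the trace does not show in full):

    # Condensed kernel nvmet teardown, mirroring clean_kernel_target (run as root).
    cfg=/sys/kernel/config/nvmet
    nqn=nqn.2024-02.io.spdk:cnode0
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # assumed target of 'echo 0'
    rm -f "$cfg/ports/1/subsystems/$nqn"                  # unlink subsystem from port 1
    rmdir "$cfg/subsystems/$nqn/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                           # only unloads once configfs is empty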
00:28:15.321 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:15.321 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:15.321 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:15.892 00:28:15.892 real 1m1.282s 00:28:15.892 user 0m54.879s 00:28:15.892 sys 0m16.291s 00:28:15.892 15:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:15.892 15:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.893 ************************************ 00:28:15.893 END TEST nvmf_auth_host 00:28:15.893 ************************************ 00:28:15.893 15:40:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:15.893 15:40:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:15.893 15:40:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:15.893 15:40:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:15.893 15:40:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.893 ************************************ 00:28:15.893 START TEST nvmf_digest 00:28:15.893 ************************************ 00:28:15.893 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:15.893 * Looking for test storage... 
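The driver messages above come from setup.sh: the first pass rebinds each ioatdma channel (8086:0b00) and the 0000:65:00.0 NVMe disk to vfio-pci, and the second pass finds them "Already using the vfio-pci driver". One common sysfs mechanism for such a rebind, sketched for a single function from this run (run as root; setup.sh's own implementation may differ in detail):

    # Rebind one PCI function to vfio-pci via driver_override.
    bdf=0000:80:01.6
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"       # detach ioatdma
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the new driver
    echo "$bdf" > /sys/bus/pci/drivers_probe                      # re-probe under the override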
00:28:15.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:15.893 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:15.893 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:15.893 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:16.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.154 --rc genhtml_branch_coverage=1 00:28:16.154 --rc genhtml_function_coverage=1 00:28:16.154 --rc genhtml_legend=1 00:28:16.154 --rc geninfo_all_blocks=1 00:28:16.154 --rc geninfo_unexecuted_blocks=1 00:28:16.154 00:28:16.154 ' 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:16.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.154 --rc genhtml_branch_coverage=1 00:28:16.154 --rc genhtml_function_coverage=1 00:28:16.154 --rc genhtml_legend=1 00:28:16.154 --rc geninfo_all_blocks=1 00:28:16.154 --rc geninfo_unexecuted_blocks=1 00:28:16.154 00:28:16.154 ' 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:16.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.154 --rc genhtml_branch_coverage=1 00:28:16.154 --rc genhtml_function_coverage=1 00:28:16.154 --rc genhtml_legend=1 00:28:16.154 --rc geninfo_all_blocks=1 00:28:16.154 --rc geninfo_unexecuted_blocks=1 00:28:16.154 00:28:16.154 ' 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:16.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.154 --rc genhtml_branch_coverage=1 00:28:16.154 --rc genhtml_function_coverage=1 00:28:16.154 --rc genhtml_legend=1 00:28:16.154 --rc geninfo_all_blocks=1 00:28:16.154 --rc geninfo_unexecuted_blocks=1 00:28:16.154 00:28:16.154 ' 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.154 
15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:16.154 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:16.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:16.155 15:40:33 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.155 15:40:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.300 
15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:24.300 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:24.300 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:24.300 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:24.301 Found net devices under 0000:31:00.0: cvl_0_0 
00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:24.301 Found net devices under 0000:31:00.1: cvl_0_1 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:28:24.301 00:28:24.301 --- 10.0.0.2 ping statistics --- 00:28:24.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.301 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:24.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:28:24.301 00:28:24.301 --- 10.0.0.1 ping statistics --- 00:28:24.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.301 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.301 ************************************ 00:28:24.301 START TEST nvmf_digest_clean 00:28:24.301 ************************************ 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3948949 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3948949 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3948949 ']' 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:24.301 15:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.301 [2024-11-06 15:40:41.696399] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:28:24.301 [2024-11-06 15:40:41.696452] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.301 [2024-11-06 15:40:41.795935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.301 [2024-11-06 15:40:41.848287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.301 [2024-11-06 15:40:41.848335] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.301 [2024-11-06 15:40:41.848344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.301 [2024-11-06 15:40:41.848351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.301 [2024-11-06 15:40:41.848359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
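Annotation: the nvmf_tcp_init sequence traced above builds the point-to-point test network for this job. One E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, while its sibling port (cvl_0_1) stays in the root namespace as the initiator. A minimal sketch of the same steps, using the interface names and addresses from this trace (both are specific to this runner):

  # Target-side namespace; the target port moves into it.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Address each end of the link.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # Bring both ports (and loopback in the namespace) up.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listener port, then verify reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every command in this sketch appears in the trace; NVMF_TARGET_NS_CMD then prefixes the target binary with "ip netns exec cvl_0_0_ns_spdk", which is why nvmf_tgt above listens on 10.0.0.2 from inside the namespace.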
00:28:24.301 [2024-11-06 15:40:41.849164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.563 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:24.563 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:24.563 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:24.563 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.563 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.824 null0 00:28:24.824 [2024-11-06 15:40:42.644782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.824 [2024-11-06 15:40:42.669090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3949217 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3949217 /var/tmp/bperf.sock 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3949217 ']' 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:24.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:24.824 15:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.824 [2024-11-06 15:40:42.728064] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:28:24.824 [2024-11-06 15:40:42.728127] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949217 ] 00:28:25.085 [2024-11-06 15:40:42.821307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.085 [2024-11-06 15:40:42.874276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.658 15:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:25.658 15:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:25.658 15:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:25.658 15:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:25.658 15:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:25.919 15:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.919 15:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.491 nvme0n1 00:28:26.491 15:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:26.491 15:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.491 Running I/O for 2 seconds... 
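Annotation: each run_bperf iteration follows the RPC choreography traced just above. A condensed replay, assuming an SPDK checkout as the working directory (the absolute Jenkins paths are shortened; the socket path, address, and NQN are taken verbatim from this trace):

  # Start bdevperf idle on its own RPC socket (this pass: randread, 4 KiB, QD 128).
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # Complete framework initialization once any accel options are in place.
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # Attach the target over TCP; --ddgst enables the CRC32C payload digest
  # that this test exists to exercise.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Run the timed workload; results come back as the JSON dumped below.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests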
00:28:28.379 18612.00 IOPS, 72.70 MiB/s [2024-11-06T14:40:46.622Z] 19326.50 IOPS, 75.49 MiB/s 00:28:28.639 Latency(us) 00:28:28.639 [2024-11-06T14:40:46.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.639 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:28.639 nvme0n1 : 2.01 19351.83 75.59 0.00 0.00 6606.31 3112.96 23702.19 00:28:28.639 [2024-11-06T14:40:46.622Z] =================================================================================================================== 00:28:28.639 [2024-11-06T14:40:46.622Z] Total : 19351.83 75.59 0.00 0.00 6606.31 3112.96 23702.19 00:28:28.639 { 00:28:28.639 "results": [ 00:28:28.639 { 00:28:28.639 "job": "nvme0n1", 00:28:28.639 "core_mask": "0x2", 00:28:28.639 "workload": "randread", 00:28:28.639 "status": "finished", 00:28:28.639 "queue_depth": 128, 00:28:28.639 "io_size": 4096, 00:28:28.639 "runtime": 2.006063, 00:28:28.639 "iops": 19351.83491246287, 00:28:28.639 "mibps": 75.59310512680808, 00:28:28.639 "io_failed": 0, 00:28:28.639 "io_timeout": 0, 00:28:28.639 "avg_latency_us": 6606.3090554081555, 00:28:28.639 "min_latency_us": 3112.96, 00:28:28.639 "max_latency_us": 23702.18666666667 00:28:28.639 } 00:28:28.639 ], 00:28:28.639 "core_count": 1 00:28:28.639 } 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:28.639 | select(.opcode=="crc32c") 00:28:28.639 | "\(.module_name) \(.executed)"' 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3949217 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3949217 ']' 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3949217 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:28.639 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:28.640 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3949217 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = 
sudo ']' 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3949217' 00:28:28.901 killing process with pid 3949217 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3949217 00:28:28.901 Received shutdown signal, test time was about 2.000000 seconds 00:28:28.901 00:28:28.901 Latency(us) 00:28:28.901 [2024-11-06T14:40:46.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.901 [2024-11-06T14:40:46.884Z] =================================================================================================================== 00:28:28.901 [2024-11-06T14:40:46.884Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3949217 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3949903 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3949903 /var/tmp/bperf.sock 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3949903 ']' 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:28.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:28.901 15:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:28.901 [2024-11-06 15:40:46.801180] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:28:28.901 [2024-11-06 15:40:46.801235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949903 ] 00:28:28.901 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:28.901 Zero copy mechanism will not be used. 00:28:29.162 [2024-11-06 15:40:46.891485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.162 [2024-11-06 15:40:46.926211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.734 15:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:29.734 15:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:29.734 15:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:29.734 15:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:29.734 15:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:29.995 15:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.995 15:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.257 nvme0n1 00:28:30.257 15:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:30.257 15:40:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:30.517 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:30.517 Zero copy mechanism will not be used. 00:28:30.517 Running I/O for 2 seconds... 
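Annotation: after each timed run, pass/fail hinges on which accel module actually computed the digests. The check traced above (get_accel_stats in host/digest.sh, repeated after every run) pulls accel statistics over the bperf RPC socket and keeps only the crc32c counters; with scan_dsa=false the expected module is "software":

  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # The run passes when the module matches the expected one and executed > 0.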
00:28:32.398 5923.00 IOPS, 740.38 MiB/s [2024-11-06T14:40:50.381Z] 6271.50 IOPS, 783.94 MiB/s 00:28:32.398 Latency(us) 00:28:32.398 [2024-11-06T14:40:50.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.398 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:32.398 nvme0n1 : 2.00 6270.33 783.79 0.00 0.00 2548.52 464.21 10704.21 00:28:32.398 [2024-11-06T14:40:50.381Z] =================================================================================================================== 00:28:32.398 [2024-11-06T14:40:50.381Z] Total : 6270.33 783.79 0.00 0.00 2548.52 464.21 10704.21 00:28:32.398 { 00:28:32.398 "results": [ 00:28:32.398 { 00:28:32.398 "job": "nvme0n1", 00:28:32.398 "core_mask": "0x2", 00:28:32.398 "workload": "randread", 00:28:32.398 "status": "finished", 00:28:32.398 "queue_depth": 16, 00:28:32.398 "io_size": 131072, 00:28:32.398 "runtime": 2.002924, 00:28:32.398 "iops": 6270.332773485165, 00:28:32.398 "mibps": 783.7915966856456, 00:28:32.398 "io_failed": 0, 00:28:32.398 "io_timeout": 0, 00:28:32.398 "avg_latency_us": 2548.515773548849, 00:28:32.398 "min_latency_us": 464.2133333333333, 00:28:32.398 "max_latency_us": 10704.213333333333 00:28:32.398 } 00:28:32.398 ], 00:28:32.398 "core_count": 1 00:28:32.398 } 00:28:32.398 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:32.398 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:32.398 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:32.398 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:32.398 | select(.opcode=="crc32c") 00:28:32.398 | "\(.module_name) \(.executed)"' 00:28:32.398 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:32.659 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3949903 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3949903 ']' 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3949903 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3949903 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3949903' 00:28:32.660 killing process with pid 3949903 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3949903 00:28:32.660 Received shutdown signal, test time was about 2.000000 seconds 00:28:32.660 00:28:32.660 Latency(us) 00:28:32.660 [2024-11-06T14:40:50.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.660 [2024-11-06T14:40:50.643Z] =================================================================================================================== 00:28:32.660 [2024-11-06T14:40:50.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:32.660 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3949903 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3950692 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3950692 /var/tmp/bperf.sock 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3950692 ']' 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:32.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:32.921 15:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:32.921 [2024-11-06 15:40:50.731899] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:28:32.921 [2024-11-06 15:40:50.731957] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950692 ] 00:28:32.921 [2024-11-06 15:40:50.816207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.921 [2024-11-06 15:40:50.846223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.862 15:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:33.862 15:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:33.862 15:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:33.862 15:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:33.862 15:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:33.862 15:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:33.862 15:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.123 nvme0n1 00:28:34.123 15:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:34.123 15:40:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:34.383 Running I/O for 2 seconds... 
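Annotation: the result blobs above are easy to sanity-check with Little's law (in-flight I/O = IOPS x average latency). For the 131072-byte randread run: 6270.33 IOPS x 2548.52e-6 s gives roughly 15.98, i.e. the configured queue depth of 16, confirming the queue stayed full for the whole 2-second window.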
00:28:36.263 30138.00 IOPS, 117.73 MiB/s [2024-11-06T14:40:54.246Z] 30290.00 IOPS, 118.32 MiB/s 00:28:36.263 Latency(us) 00:28:36.263 [2024-11-06T14:40:54.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.263 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:36.263 nvme0n1 : 2.00 30304.36 118.38 0.00 0.00 4218.29 1706.67 10485.76 00:28:36.263 [2024-11-06T14:40:54.246Z] =================================================================================================================== 00:28:36.263 [2024-11-06T14:40:54.246Z] Total : 30304.36 118.38 0.00 0.00 4218.29 1706.67 10485.76 00:28:36.263 { 00:28:36.263 "results": [ 00:28:36.263 { 00:28:36.263 "job": "nvme0n1", 00:28:36.263 "core_mask": "0x2", 00:28:36.263 "workload": "randwrite", 00:28:36.263 "status": "finished", 00:28:36.263 "queue_depth": 128, 00:28:36.263 "io_size": 4096, 00:28:36.263 "runtime": 2.004827, 00:28:36.263 "iops": 30304.36042611158, 00:28:36.263 "mibps": 118.37640791449836, 00:28:36.263 "io_failed": 0, 00:28:36.263 "io_timeout": 0, 00:28:36.263 "avg_latency_us": 4218.2934606205245, 00:28:36.263 "min_latency_us": 1706.6666666666667, 00:28:36.263 "max_latency_us": 10485.76 00:28:36.263 } 00:28:36.263 ], 00:28:36.263 "core_count": 1 00:28:36.263 } 00:28:36.263 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:36.263 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:36.263 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:36.263 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:36.263 | select(.opcode=="crc32c") 00:28:36.263 | "\(.module_name) \(.executed)"' 00:28:36.263 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3950692 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3950692 ']' 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3950692 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3950692 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3950692' 00:28:36.524 killing process with pid 3950692 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3950692 00:28:36.524 Received shutdown signal, test time was about 2.000000 seconds 00:28:36.524 00:28:36.524 Latency(us) 00:28:36.524 [2024-11-06T14:40:54.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.524 [2024-11-06T14:40:54.507Z] =================================================================================================================== 00:28:36.524 [2024-11-06T14:40:54.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3950692 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3951494 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3951494 /var/tmp/bperf.sock 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3951494 ']' 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:36.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:36.524 15:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:36.784 [2024-11-06 15:40:54.553893] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:28:36.784 [2024-11-06 15:40:54.553950] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951494 ] 00:28:36.784 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:36.784 Zero copy mechanism will not be used. 00:28:36.784 [2024-11-06 15:40:54.636197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.784 [2024-11-06 15:40:54.665528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.724 15:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:37.724 15:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:37.724 15:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:37.724 15:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:37.724 15:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:37.724 15:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.724 15:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.984 nvme0n1 00:28:37.984 15:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:37.984 15:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:38.245 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:38.245 Zero copy mechanism will not be used. 00:28:38.245 Running I/O for 2 seconds... 
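Annotation: the throughput column is derived directly from IOPS and I/O size, mibps = iops * io_size / 2^20. For the 4096-byte randwrite run above, 30304.36 * 4096 / 1048576 comes to 118.38 MiB/s, matching the reported value; for the 131072-byte runs each I/O is 0.125 MiB, so MiB/s is simply IOPS / 8.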
00:28:40.125 8230.00 IOPS, 1028.75 MiB/s [2024-11-06T14:40:58.108Z] 7695.50 IOPS, 961.94 MiB/s 00:28:40.125 Latency(us) 00:28:40.125 [2024-11-06T14:40:58.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.125 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:40.125 nvme0n1 : 2.00 7694.88 961.86 0.00 0.00 2075.51 1037.65 6116.69 00:28:40.125 [2024-11-06T14:40:58.108Z] =================================================================================================================== 00:28:40.125 [2024-11-06T14:40:58.108Z] Total : 7694.88 961.86 0.00 0.00 2075.51 1037.65 6116.69 00:28:40.125 { 00:28:40.125 "results": [ 00:28:40.125 { 00:28:40.125 "job": "nvme0n1", 00:28:40.125 "core_mask": "0x2", 00:28:40.125 "workload": "randwrite", 00:28:40.125 "status": "finished", 00:28:40.125 "queue_depth": 16, 00:28:40.125 "io_size": 131072, 00:28:40.125 "runtime": 2.00276, 00:28:40.125 "iops": 7694.8810641314985, 00:28:40.125 "mibps": 961.8601330164373, 00:28:40.125 "io_failed": 0, 00:28:40.125 "io_timeout": 0, 00:28:40.125 "avg_latency_us": 2075.511541972184, 00:28:40.125 "min_latency_us": 1037.6533333333334, 00:28:40.125 "max_latency_us": 6116.693333333334 00:28:40.125 } 00:28:40.125 ], 00:28:40.125 "core_count": 1 00:28:40.125 } 00:28:40.125 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:40.125 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:40.125 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:40.125 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:40.125 | select(.opcode=="crc32c") 00:28:40.125 | "\(.module_name) \(.executed)"' 00:28:40.125 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3951494 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3951494 ']' 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3951494 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3951494 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # 
'[' reactor_1 = sudo ']' 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3951494' 00:28:40.386 killing process with pid 3951494 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3951494 00:28:40.386 Received shutdown signal, test time was about 2.000000 seconds 00:28:40.386 00:28:40.386 Latency(us) 00:28:40.386 [2024-11-06T14:40:58.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.386 [2024-11-06T14:40:58.369Z] =================================================================================================================== 00:28:40.386 [2024-11-06T14:40:58.369Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:40.386 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3951494 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3948949 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3948949 ']' 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3948949 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3948949 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3948949' 00:28:40.646 killing process with pid 3948949 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3948949 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3948949 00:28:40.646 00:28:40.646 real 0m16.965s 00:28:40.646 user 0m33.264s 00:28:40.646 sys 0m3.909s 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:40.646 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:40.646 ************************************ 00:28:40.646 END TEST nvmf_digest_clean 00:28:40.646 ************************************ 00:28:40.647 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:40.647 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:40.647 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:40.647 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:40.907 ************************************ 00:28:40.907 START TEST nvmf_digest_error 00:28:40.907 ************************************ 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3952304 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3952304 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3952304 ']' 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:40.907 15:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.907 [2024-11-06 15:40:58.736408] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:28:40.907 [2024-11-06 15:40:58.736466] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.907 [2024-11-06 15:40:58.830512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.907 [2024-11-06 15:40:58.863647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.907 [2024-11-06 15:40:58.863678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.907 [2024-11-06 15:40:58.863684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.907 [2024-11-06 15:40:58.863689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.907 [2024-11-06 15:40:58.863693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
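Annotation: nvmf_digest_error repeats the digest workloads with digest generation deliberately sabotaged on the target. Before framework_start_init, the crc32c opcode is reassigned to the accel "error" module (the accel_assign_opc RPC visible just below), so digest calculations can be failed on demand in the later error-injection steps. A hedged sketch of the one RPC that differs from the clean pass (rpc_cmd here targets the nvmf_tgt's default /var/tmp/spdk.sock):

  ./scripts/rpc.py accel_assign_opc -o crc32c -m error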
00:28:40.907 [2024-11-06 15:40:58.864206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.850 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:41.850 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:41.850 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:41.850 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:41.850 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.851 [2024-11-06 15:40:59.558117] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.851 null0 00:28:41.851 [2024-11-06 15:40:59.637749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.851 [2024-11-06 15:40:59.661941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3952503 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3952503 /var/tmp/bperf.sock 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3952503 ']' 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
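By the end of this block the target is fully configured: crc32c is serviced by the error module, a null bdev (null0) backs the subsystem, and a TCP listener is up on 10.0.0.2 port 4420; bdevperf is then launched with its own RPC socket (-r /var/tmp/bperf.sock) and -z, so it waits for an RPC-driven perform_tests instead of running immediately. The rpc_cmd batch behind digest.sh@43 is not echoed entry-by-entry above; a plausible equivalent, assuming standard SPDK RPC names and illustrative null-bdev sizing (only null0, the nqn used at attach time, and the address/port are attested in this log):

  ./scripts/rpc.py bdev_null_create null0 1000 512   # bdev name from the log; size/block size illustrative
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420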
00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:41.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:41.851 15:40:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.851 [2024-11-06 15:40:59.718559] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:28:41.851 [2024-11-06 15:40:59.718607] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952503 ] 00:28:41.851 [2024-11-06 15:40:59.799655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.851 [2024-11-06 15:40:59.829443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.792 15:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:42.792 15:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:42.792 15:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:42.792 15:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:42.792 15:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:42.792 15:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.792 15:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.792 15:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.792 15:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.793 15:41:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.363 nvme0n1 00:28:43.364 15:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:43.364 15:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.364 15:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
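The host side is now wired up: bdev-layer retries are made unbounded (--bdev-retry-count -1) with per-error statistics enabled (--nvme-error-stat), so injected digest failures are retried and counted rather than failing the job outright; crc32c error injection is cleared so the controller can attach cleanly; and the controller is attached with --ddgst, enabling the NVMe/TCP data digest (CRC32C). Condensed from the xtrace, together with the corrupt injection issued at digest.sh@67 — the commands are verbatim from this log, and the rpc_cmd calls apparently go to the nvmf target's default socket rather than bperf.sock:

  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable   # start clean for the attach
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c results

With crc32c corrupted on the target, the data digests it computes no longer verify on the host, so each affected READ in the two-second run below is logged as a data digest error followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — the failure mode this test exists to exercise.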
00:28:43.364 15:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.364 15:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:43.364 15:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:43.364 Running I/O for 2 seconds... 00:28:43.364 [2024-11-06 15:41:01.216952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.216984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.216994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.227902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.227923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.227930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.237044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.237062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.237068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.245591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.245609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.245616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.254877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.254894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.254901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.263874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.263891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.263898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.272512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.272531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.272538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.281350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.281367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.281373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.290296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.290313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.290319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.298912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.298929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.298936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.308061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.308077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.308084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.317385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.317402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.317408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.327305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.327322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.327329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.334634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.334651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.334657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.364 [2024-11-06 15:41:01.344197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.364 [2024-11-06 15:41:01.344214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.364 [2024-11-06 15:41:01.344220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.354481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.354498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.354504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.362973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.362990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.363000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.372379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.372395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.372401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.380834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.380851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.380857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.389996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.390012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.390018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.398772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.398789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.398795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.406670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.406687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.406693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.416320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.416336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.416343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.425353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.425370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.425376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.434628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.434646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.434652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.444083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.444104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.444110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.453390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.453406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 
[2024-11-06 15:41:01.453412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.462249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.462266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.462272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.471233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.471249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.471255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.479914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.479932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.479938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.490620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.490636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.490643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.499895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.499913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.499919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.508507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.508524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.508531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.517264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.517280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10839 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.517287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.526846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.526863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.526870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.535624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.535641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.535647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.544418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.544434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.544440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.552643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.552659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.552665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.561610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.561627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.561634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.570109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.570126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.570132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.579420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.626 [2024-11-06 15:41:01.579437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.626 [2024-11-06 15:41:01.579443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.626 [2024-11-06 15:41:01.588768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.627 [2024-11-06 15:41:01.588785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.627 [2024-11-06 15:41:01.588791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.627 [2024-11-06 15:41:01.597795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.627 [2024-11-06 15:41:01.597811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.627 [2024-11-06 15:41:01.597821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.627 [2024-11-06 15:41:01.606502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.627 [2024-11-06 15:41:01.606519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.627 [2024-11-06 15:41:01.606525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.614968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.614986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.614993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.624997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.625015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.625021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.633733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.633754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.633761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.642069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 
15:41:01.642086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.642092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.651237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.651254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.651260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.659971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.659987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.659994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.669752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.669768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.669774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.678077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.678094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.678100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.687469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.687486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.687493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.697304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.697321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.697327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.707004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.707021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.707027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.715603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.715619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.715626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.725267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.725284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.725290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.735089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.735106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.735112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.743664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.743680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.743687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.751956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.751973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.751983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.761003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.761020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.761026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.769012] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.769028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.769034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.778341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.778358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.778364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.888 [2024-11-06 15:41:01.788384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.888 [2024-11-06 15:41:01.788401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.888 [2024-11-06 15:41:01.788408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.889 [2024-11-06 15:41:01.797003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.889 [2024-11-06 15:41:01.797019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.889 [2024-11-06 15:41:01.797025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.889 [2024-11-06 15:41:01.805549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.889 [2024-11-06 15:41:01.805566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.889 [2024-11-06 15:41:01.805573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.889 [2024-11-06 15:41:01.815046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.889 [2024-11-06 15:41:01.815062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.889 [2024-11-06 15:41:01.815068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.889 [2024-11-06 15:41:01.824590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.889 [2024-11-06 15:41:01.824606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.889 [2024-11-06 15:41:01.824612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:43.889 [2024-11-06 15:41:01.833920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.889 [2024-11-06 15:41:01.833940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.889 [2024-11-06 15:41:01.833946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.889 [2024-11-06 15:41:01.841543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.889 [2024-11-06 15:41:01.841560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.889 [2024-11-06 15:41:01.841566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.889 [2024-11-06 15:41:01.851604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.889 [2024-11-06 15:41:01.851621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.889 [2024-11-06 15:41:01.851627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.889 [2024-11-06 15:41:01.861889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:43.889 [2024-11-06 15:41:01.861905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.889 [2024-11-06 15:41:01.861911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.871388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.871404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.871412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.883384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.883400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.883407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.894964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.894981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.894987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.905601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.905618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.905624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.913407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.913424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.913430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.923297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.923313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.923320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.933772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.933789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.933795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.944389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.944405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.944411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.951866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.951882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.951888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.961123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.961139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.961145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.971311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.971328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.971334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.980231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.980247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.980254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.988070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.988086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.988093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:01.997059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:01.997075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:01.997084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:02.009675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:02.009692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:02.009698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:02.021453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:02.021469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.156 [2024-11-06 15:41:02.021475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.156 [2024-11-06 15:41:02.030515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140) 00:28:44.156 [2024-11-06 15:41:02.030531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:44.156 [2024-11-06 15:41:02.030537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.156 [2024-11-06 15:41:02.039236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115b140)
00:28:44.156 [2024-11-06 15:41:02.039253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.156 [2024-11-06 15:41:02.039259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... many further identical three-line record groups omitted: each is a "data digest error on tqpair=(0x115b140)" *ERROR* from nvme_tcp_accel_seq_recv_compute_crc32_done, followed by the READ command print (len:1, distinct lba/cid per group) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd:0001; the run continues through [2024-11-06 15:41:02.201346] (READ lba:16189 cid:26) ...]
00:28:44.455 27400.00 IOPS, 107.03 MiB/s [2024-11-06T14:41:02.438Z]
[... the same three-line pattern then continues uninterrupted through [2024-11-06 15:41:03.203259] (final group: READ lba:1459 cid:59), omitted here ...]
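Each record group above ends in exactly one COMMAND TRANSIENT TRANSPORT ERROR completion, so the injected digest failures can be tallied straight from the console text. A minimal sketch, assuming this console output has been saved to a hypothetical file named console.log (the tally covers whatever portion of the log was captured, so it need not equal the harness's own per-bdev counter):

  #!/usr/bin/env bash
  # Count completions carrying the transient transport error status (00/22).
  # console.log is a placeholder name for a saved copy of this console output.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log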
00:28:45.321 27855.50 IOPS, 108.81 MiB/s
00:28:45.321                                                                                                 Latency(us)
00:28:45.321 [2024-11-06T14:41:03.304Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:45.321 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:45.321 	 nvme0n1                  :       2.00   27873.45     108.88       0.00     0.00    4587.81    2266.45   12888.75
00:28:45.321 [2024-11-06T14:41:03.304Z] ===================================================================================================================
00:28:45.321 [2024-11-06T14:41:03.304Z] Total                       :              27873.45     108.88       0.00     0.00    4587.81    2266.45   12888.75
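The MiB/s column in the table above follows directly from the reported IOPS and the 4096-byte IO size: 27873.45 × 4096 / 1048576 ≈ 108.88. A one-line check of that arithmetic (a sketch, not part of the harness):

  #!/usr/bin/env bash
  # Recompute throughput from the reported IOPS and the 4 KiB IO size.
  awk 'BEGIN { printf "%.2f MiB/s\n", 27873.45 * 4096 / 1048576 }'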
"job": "nvme0n1", 00:28:45.321 "core_mask": "0x2", 00:28:45.321 "workload": "randread", 00:28:45.321 "status": "finished", 00:28:45.321 "queue_depth": 128, 00:28:45.321 "io_size": 4096, 00:28:45.321 "runtime": 2.003304, 00:28:45.321 "iops": 27873.45305555223, 00:28:45.321 "mibps": 108.8806759982509, 00:28:45.321 "io_failed": 0, 00:28:45.321 "io_timeout": 0, 00:28:45.321 "avg_latency_us": 4587.808280234245, 00:28:45.321 "min_latency_us": 2266.4533333333334, 00:28:45.321 "max_latency_us": 12888.746666666666 00:28:45.321 } 00:28:45.321 ], 00:28:45.321 "core_count": 1 00:28:45.321 } 00:28:45.321 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:45.321 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:45.321 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:45.321 | .driver_specific 00:28:45.321 | .nvme_error 00:28:45.321 | .status_code 00:28:45.321 | .command_transient_transport_error' 00:28:45.321 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:45.582 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 )) 00:28:45.582 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3952503 00:28:45.582 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3952503 ']' 00:28:45.582 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3952503 00:28:45.582 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:45.582 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:45.582 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3952503 00:28:45.582 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:45.582 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:45.582 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3952503' 00:28:45.582 killing process with pid 3952503 00:28:45.582 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3952503 00:28:45.582 Received shutdown signal, test time was about 2.000000 seconds 00:28:45.582 00:28:45.582 Latency(us) 00:28:45.582 [2024-11-06T14:41:03.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.582 [2024-11-06T14:41:03.566Z] =================================================================================================================== 00:28:45.583 [2024-11-06T14:41:03.566Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.583 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3952503 00:28:45.583 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:45.583 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
00:28:45.583 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:45.583 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:45.583 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:45.583 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:45.844 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3953338
00:28:45.844 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3953338 /var/tmp/bperf.sock
00:28:45.844 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3953338 ']'
00:28:45.844 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:45.844 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:45.844 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:45.844 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:45.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:45.844 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:45.844 15:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:45.844 [2024-11-06 15:41:03.612278] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:28:45.844 [2024-11-06 15:41:03.612333] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953338 ]
00:28:45.844 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:45.844 Zero copy mechanism will not be used.
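For the next case (randread, 128 KiB blocks, queue depth 16) the harness restarts bdevperf in its RPC-driven mode. The trace above corresponds to roughly the following sketch, with the binary path and flags taken from the log; the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten, assuming rpc_get_methods as the readiness probe:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rw=randread bs=131072 qd=16

  # -z makes bdevperf start idle and wait for RPCs on the -r socket; the
  # workload (-w/-o/-q, 2-second runtime) is only armed here, not yet run.
  "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
  bperfpid=$!

  # Wait until the UNIX-domain RPC socket answers before configuring it.
  for ((i = 0; i < 100; i++)); do
      "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null && break
      sleep 0.1
  done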
00:28:45.844 [2024-11-06 15:41:03.696153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:45.844 [2024-11-06 15:41:03.725373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:46.786 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:46.786 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:46.786 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:46.786 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:46.786 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:46.786 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:46.786 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:46.786 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:46.786 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:46.786 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:47.047 nvme0n1
00:28:47.047 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:47.047 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:47.047 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:47.047 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:47.047 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:47.047 15:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:47.047 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:47.047 Zero copy mechanism will not be used.
00:28:47.047 Running I/O for 2 seconds...
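The RPC sequence above is what makes digest errors both occur and stay observable: error statistics plus unlimited bdev retries on the host side, TCP data digest (DDGST) enabled on the new controller, and crc32c corruption injected into the accel layer only after the attach has completed. A sketch of the same sequence, with flags copied from the trace; the helper names are just for this sketch, and which socket rpc_cmd targets here is an assumption (bperf_rpc explicitly targets bdevperf's /var/tmp/bperf.sock, while rpc_cmd appears to use the default RPC socket of the app whose crc32c results get corrupted):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bperf() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  tgt()   { "$spdk/scripts/rpc.py" "$@"; }

  # Tally NVMe errors per status code and retry failed I/O indefinitely, so
  # injected digest errors show up in bdev_get_iostat instead of failing the job.
  bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Keep crc32c injection off while connecting: the attach itself must see
  # valid digests...
  tgt accel_error_inject_error -o crc32c -t disable

  # ...attach the target subsystem with data digest enabled...
  bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # ...then have the accel_error module corrupt crc32c results (-i 32, as in
  # the trace), so a slice of the data digests fail verification.
  tgt accel_error_inject_error -o crc32c -t corrupt -i 32

  # Kick off the workload armed on the bdevperf command line.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

This interval-based corruption is also consistent with the earlier run's results, where bdevperf reported io_failed: 0 even though 218 transient-transport-error completions were counted: every corrupted read is retried rather than failed.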
00:28:47.047 [2024-11-06 15:41:04.929564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.047 [2024-11-06 15:41:04.929597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.047 [2024-11-06 15:41:04.929607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.047 [2024-11-06 15:41:04.939104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.047 [2024-11-06 15:41:04.939126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.047 [2024-11-06 15:41:04.939133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.047 [2024-11-06 15:41:04.950205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.047 [2024-11-06 15:41:04.950224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.047 [2024-11-06 15:41:04.950231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.047 [2024-11-06 15:41:04.961055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.047 [2024-11-06 15:41:04.961073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.047 [2024-11-06 15:41:04.961080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.047 [2024-11-06 15:41:04.972280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.048 [2024-11-06 15:41:04.972298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.048 [2024-11-06 15:41:04.972304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.048 [2024-11-06 15:41:04.984222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.048 [2024-11-06 15:41:04.984239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.048 [2024-11-06 15:41:04.984245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.048 [2024-11-06 15:41:04.994257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.048 [2024-11-06 15:41:04.994275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.048 [2024-11-06 15:41:04.994281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.048 [2024-11-06 15:41:05.004881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.048 [2024-11-06 15:41:05.004899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.048 [2024-11-06 15:41:05.004905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.048 [2024-11-06 15:41:05.015809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.048 [2024-11-06 15:41:05.015826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.048 [2024-11-06 15:41:05.015832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.048 [2024-11-06 15:41:05.024256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.048 [2024-11-06 15:41:05.024273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.048 [2024-11-06 15:41:05.024279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.033853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.033870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.310 [2024-11-06 15:41:05.033877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.044279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.044296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.310 [2024-11-06 15:41:05.044303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.053670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.053687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.310 [2024-11-06 15:41:05.053698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.064675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.064692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.310 [2024-11-06 15:41:05.064698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.075272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.075290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.310 [2024-11-06 15:41:05.075296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.085060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.085078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.310 [2024-11-06 15:41:05.085084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.095171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.095188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.310 [2024-11-06 15:41:05.095195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.106144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.106162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.310 [2024-11-06 15:41:05.106168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.116856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.116873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.310 [2024-11-06 15:41:05.116879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.128098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.128116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.310 [2024-11-06 15:41:05.128122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.138058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.138075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:47.310 [2024-11-06 15:41:05.138081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.148076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.148097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.310 [2024-11-06 15:41:05.148103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.310 [2024-11-06 15:41:05.159454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.310 [2024-11-06 15:41:05.159472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.310 [2024-11-06 15:41:05.159478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.168674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.168691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.168697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.179297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.179314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.179321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.191519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.191537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.191544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.201348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.201365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.201371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.206941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.206958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.206965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.217722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.217739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.217750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.220935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.220952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.220958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.228152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.228169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.228176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.238548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.238565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.238571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.248115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.248133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.248139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.257305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.257323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.257329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.268691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.268708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.268714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.279151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.279168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.279174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.311 [2024-11-06 15:41:05.286288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.311 [2024-11-06 15:41:05.286305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.311 [2024-11-06 15:41:05.286311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.573 [2024-11-06 15:41:05.292403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.573 [2024-11-06 15:41:05.292420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.573 [2024-11-06 15:41:05.292426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.573 [2024-11-06 15:41:05.302591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.573 [2024-11-06 15:41:05.302612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.573 [2024-11-06 15:41:05.302618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.573 [2024-11-06 15:41:05.310850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.573 [2024-11-06 15:41:05.310868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.573 [2024-11-06 15:41:05.310874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.573 [2024-11-06 15:41:05.319289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.573 [2024-11-06 15:41:05.319307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.573 [2024-11-06 15:41:05.319313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.573 [2024-11-06 15:41:05.326636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 
00:28:47.573 [2024-11-06 15:41:05.326654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.573 [2024-11-06 15:41:05.326660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.573 [2024-11-06 15:41:05.331846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.573 [2024-11-06 15:41:05.331863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.573 [2024-11-06 15:41:05.331869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.573 [2024-11-06 15:41:05.340947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.573 [2024-11-06 15:41:05.340964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.340970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.350367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.350385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.350391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.355101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.355118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.355124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.364233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.364250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.364256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.369927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.369944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.369950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.379081] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.379098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.379104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.387663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.387680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.387686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.396921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.396938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.396944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.409017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.409033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.409040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.421289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.421305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.421311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.434155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.434172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.434178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.446658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.446675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.446681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.459115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.459132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.459142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.471708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.471725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.471731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.475184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.475201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.475207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.482952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.482970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.482976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.489770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.489788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.489794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.501331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.501349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.501355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.511835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.511853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.511859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.519503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.519521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.519527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.529982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.530000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.530006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.540509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.540530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.540537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.574 [2024-11-06 15:41:05.551178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.574 [2024-11-06 15:41:05.551196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.574 [2024-11-06 15:41:05.551202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.561171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.561189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.561196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.571370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.571389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.571395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.575658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.575676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.575682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.580693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.580711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.580717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.587143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.587161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.587167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.593179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.593197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.593203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.601277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.601296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.601302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.605760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.605778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.605784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.612521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.612539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.612545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.616886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.616903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 
[2024-11-06 15:41:05.616909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.626821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.626838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.626844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.636390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.636408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.636414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.643743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.643765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.643771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.648084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.648101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.648107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.652604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.652622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.652628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.659307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.659325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.837 [2024-11-06 15:41:05.659334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.837 [2024-11-06 15:41:05.666974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:47.837 [2024-11-06 15:41:05.666992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.837 [2024-11-06 15:41:05.666998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:47.837 [2024-11-06 15:41:05.677270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60)
00:28:47.837 [2024-11-06 15:41:05.677288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.837 [2024-11-06 15:41:05.677294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:47.837 [2024-11-06 15:41:05.686830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60)
00:28:47.837 [2024-11-06 15:41:05.686848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.837 [2024-11-06 15:41:05.686854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... repeated entries omitted (timestamps 15:41:05.696030 through 15:41:05.910887): each READ on sqid:1 (len:32, SGL TRANSPORT DATA BLOCK) hits a data digest error on tqpair=(0x2030a60) and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:28:48.100 3532.00 IOPS, 441.50 MiB/s [2024-11-06T14:41:06.083Z]
[... repeated entries omitted (timestamps 15:41:05.920605 through 15:41:06.722860): same pattern of data digest errors on tqpair=(0x2030a60) followed by COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions on qid:1 ...]
00:28:48.891 [2024-11-06 15:41:06.732576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60)
00:28:48.891 [2024-11-06 15:41:06.732595] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.891 [2024-11-06 15:41:06.732601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.891 [2024-11-06 15:41:06.743027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.891 [2024-11-06 15:41:06.743046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.891 [2024-11-06 15:41:06.743052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.891 [2024-11-06 15:41:06.752733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.891 [2024-11-06 15:41:06.752756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.891 [2024-11-06 15:41:06.752762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.891 [2024-11-06 15:41:06.758687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.891 [2024-11-06 15:41:06.758705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.891 [2024-11-06 15:41:06.758712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.891 [2024-11-06 15:41:06.767806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.767824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.767830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.772250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.772272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.772278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.776852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.776870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.776876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.781467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 
00:28:48.892 [2024-11-06 15:41:06.781485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.781491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.788510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.788528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.788534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.796830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.796849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.796859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.803805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.803823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.803829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.812018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.812036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.812042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.815899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.815918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.815924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.827465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.827484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.827490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.832327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.832345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.832351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.841676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.841695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.841701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.848510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.848528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.848534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.859962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.859980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.859986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.892 [2024-11-06 15:41:06.871546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:48.892 [2024-11-06 15:41:06.871567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.892 [2024-11-06 15:41:06.871573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.153 [2024-11-06 15:41:06.882038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:49.153 [2024-11-06 15:41:06.882055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.154 [2024-11-06 15:41:06.882062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.154 [2024-11-06 15:41:06.890868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:49.154 [2024-11-06 15:41:06.890887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.154 [2024-11-06 15:41:06.890893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.154 [2024-11-06 15:41:06.898828] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:49.154 [2024-11-06 15:41:06.898846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.154 [2024-11-06 15:41:06.898852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.154 [2024-11-06 15:41:06.903537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:49.154 [2024-11-06 15:41:06.903555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.154 [2024-11-06 15:41:06.903561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.154 [2024-11-06 15:41:06.908154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:49.154 [2024-11-06 15:41:06.908173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.154 [2024-11-06 15:41:06.908178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.154 [2024-11-06 15:41:06.918029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2030a60) 00:28:49.154 [2024-11-06 15:41:06.918047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.154 [2024-11-06 15:41:06.918054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.154 3911.50 IOPS, 488.94 MiB/s 00:28:49.154 Latency(us) 00:28:49.154 [2024-11-06T14:41:07.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.154 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:49.154 nvme0n1 : 2.01 3909.49 488.69 0.00 0.00 4089.63 587.09 13216.43 00:28:49.154 [2024-11-06T14:41:07.137Z] =================================================================================================================== 00:28:49.154 [2024-11-06T14:41:07.137Z] Total : 3909.49 488.69 0.00 0.00 4089.63 587.09 13216.43 00:28:49.154 { 00:28:49.154 "results": [ 00:28:49.154 { 00:28:49.154 "job": "nvme0n1", 00:28:49.154 "core_mask": "0x2", 00:28:49.154 "workload": "randread", 00:28:49.154 "status": "finished", 00:28:49.154 "queue_depth": 16, 00:28:49.154 "io_size": 131072, 00:28:49.154 "runtime": 2.00512, 00:28:49.154 "iops": 3909.491701244813, 00:28:49.154 "mibps": 488.68646265560164, 00:28:49.154 "io_failed": 0, 00:28:49.154 "io_timeout": 0, 00:28:49.154 "avg_latency_us": 4089.6279219288176, 00:28:49.154 "min_latency_us": 587.0933333333334, 00:28:49.154 "max_latency_us": 13216.426666666666 00:28:49.154 } 00:28:49.154 ], 00:28:49.154 "core_count": 1 00:28:49.154 } 00:28:49.154 15:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:49.154 15:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:49.154 15:41:06 
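The randread pass closes at 3909.49 IOPS (as a sanity check on the table above: 3909.49 IOPS x 131072 B per IO / 2^20 = 488.69 MiB/s, matching the reported mibps). The trace that follows reads back the per-bdev transient-transport-error counter over the bperf RPC socket; a minimal standalone equivalent of get_transient_errcount, assuming the same socket path and bdev name as this run:

  # Query iostat for nvme0n1 over the bperf RPC socket and pull out the
  # transient-transport-error counter (the jq filter is the one traced below).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'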
00:28:49.154 15:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
15:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
15:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:49.154 | .driver_specific
00:28:49.154 | .nvme_error
00:28:49.154 | .status_code
00:28:49.154 | .command_transient_transport_error'
00:28:49.154 15:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:49.154 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 252 > 0 ))
00:28:49.154 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3953338
00:28:49.154 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3953338 ']'
00:28:49.154 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3953338
00:28:49.154 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3953338
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3953338'
killing process with pid 3953338
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3953338
00:28:49.416 Received shutdown signal, test time was about 2.000000 seconds
00:28:49.416
00:28:49.416                                                Latency(us)
00:28:49.416 [2024-11-06T14:41:07.399Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:49.416 [2024-11-06T14:41:07.399Z] ===================================================================================================================
00:28:49.416 [2024-11-06T14:41:07.399Z] Total                       :               0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3953338
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3954024
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3954024 /var/tmp/bperf.sock
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3954024 ']'
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
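Here digest.sh launches a fresh bdevperf instance for the randwrite pass: -m 2 pins it to core 1 (mask 0x2), -r names its RPC socket, -w/-o/-q/-t set workload, IO size, queue depth and duration, and -z keeps it idle until perform_tests arrives over RPC. A minimal sketch of the same launch-and-wait pattern, using rpc_get_methods as the readiness probe (the harness's own waitforlisten adds more checks, such as pid liveness and retry limits):

  # Start bdevperf idle against the bperf RPC socket, then poll until it answers.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done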
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:49.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:49.416 15:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:49.416 [2024-11-06 15:41:07.358977] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:28:49.416 [2024-11-06 15:41:07.359032] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954024 ]
00:28:49.678 [2024-11-06 15:41:07.443880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:49.678 [2024-11-06 15:41:07.471992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:50.248 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:50.248 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:50.248 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:50.248 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:50.509 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:50.509 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:50.509 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:50.509 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:50.509 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:50.509 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:50.770 nvme0n1
00:28:50.770 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:50.770 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
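The error-injection setup above is driven entirely over the bperf socket: NVMe error counters are kept per status code, the bdev layer retries indefinitely so injected digest errors surface as transient rather than fatal, the controller is attached with --ddgst so the initiator verifies the data digest on each PDU, and crc32c corruption is enabled only after the attach succeeds. The same four RPCs as standalone commands (socket path and nqn as in this run; the reading of -i 256 as an injection count is an assumption, the flag is taken verbatim from the trace):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # per-status error counters; retry forever
  $RPC accel_error_inject_error -o crc32c -t disable                   # no crc32c corruption while attaching
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                     # data digest verification on
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256            # now corrupt crc32c results (-i 256 verbatim from the trace)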
00:28:50.770 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:50.770 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:50.770 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:50.771 15:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:50.771 Running I/O for 2 seconds...
00:28:50.771 [2024-11-06 15:41:08.741305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef81e0
00:28:50.771 [2024-11-06 15:41:08.742025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.771 [2024-11-06 15:41:08.742053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0
[... further Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triples, 15:41:08.750221 through 15:41:09.363692 (same pattern; qid:1, varying cid, lba and pdu), omitted ...]
00:28:51.559 [2024-11-06 15:41:09.371204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eeff18
00:28:51.559 [2024-11-06 15:41:09.372169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.559 [2024-11-06 15:41:09.372184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:28:51.559 [2024-11-06 15:41:09.379690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016efd640
00:28:51.559 [2024-11-06 15:41:09.380649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.380664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.388172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee0a68 00:28:51.559 [2024-11-06 15:41:09.389144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.389160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.396633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016edf988 00:28:51.559 [2024-11-06 15:41:09.397612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.397627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.405115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef6cc8 00:28:51.559 [2024-11-06 15:41:09.406087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.406103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.413594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee4140 00:28:51.559 [2024-11-06 15:41:09.414576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.414592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.422090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee3060 00:28:51.559 [2024-11-06 15:41:09.423008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.423024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.430568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee1f80 00:28:51.559 [2024-11-06 15:41:09.431525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.431541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.439046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with 
pdu=0x200016ee95a0 00:28:51.559 [2024-11-06 15:41:09.440001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.440016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.447507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eea680 00:28:51.559 [2024-11-06 15:41:09.448474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.448490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.455995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eeb760 00:28:51.559 [2024-11-06 15:41:09.456959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.456975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.464483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eec840 00:28:51.559 [2024-11-06 15:41:09.465448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.465464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.472992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee7818 00:28:51.559 [2024-11-06 15:41:09.473966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.473982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.481468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee6300 00:28:51.559 [2024-11-06 15:41:09.482451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.482467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.489928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee5220 00:28:51.559 [2024-11-06 15:41:09.490906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.490921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.498419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd01750) with pdu=0x200016ef8e88 00:28:51.559 [2024-11-06 15:41:09.499381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.499396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.506900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef0350 00:28:51.559 [2024-11-06 15:41:09.507772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.507788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.515411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016efda78 00:28:51.559 [2024-11-06 15:41:09.516337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.516353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.523900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee0ea0 00:28:51.559 [2024-11-06 15:41:09.524873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.524889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.559 [2024-11-06 15:41:09.532364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016edfdc0 00:28:51.559 [2024-11-06 15:41:09.533332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.559 [2024-11-06 15:41:09.533347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.540831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016edece0 00:28:51.820 [2024-11-06 15:41:09.541810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.541826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.549338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef6890 00:28:51.820 [2024-11-06 15:41:09.550320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.550336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.557814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd01750) with pdu=0x200016ee3d08 00:28:51.820 [2024-11-06 15:41:09.558772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.558788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.566294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee2c28 00:28:51.820 [2024-11-06 15:41:09.567258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.567274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.574767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee1b48 00:28:51.820 [2024-11-06 15:41:09.575740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.575758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.583230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee99d8 00:28:51.820 [2024-11-06 15:41:09.584191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.584206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.591701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eeaab8 00:28:51.820 [2024-11-06 15:41:09.592675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.592690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.600187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eebb98 00:28:51.820 [2024-11-06 15:41:09.601172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.601187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.608652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eecc78 00:28:51.820 [2024-11-06 15:41:09.609631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.609646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.617140] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee6738 00:28:51.820 [2024-11-06 15:41:09.618099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.618115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.625590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee5658 00:28:51.820 [2024-11-06 15:41:09.626537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.626553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.634037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eef6a8 00:28:51.820 [2024-11-06 15:41:09.635005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.635023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.642523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef0ff8 00:28:51.820 [2024-11-06 15:41:09.643501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.643516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.651018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eeff18 00:28:51.820 [2024-11-06 15:41:09.651979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.820 [2024-11-06 15:41:09.651994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.820 [2024-11-06 15:41:09.659497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016efd640 00:28:51.820 [2024-11-06 15:41:09.660466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.660482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.667969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee0a68 00:28:51.821 [2024-11-06 15:41:09.668954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.668970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.676424] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016edf988 00:28:51.821 [2024-11-06 15:41:09.677354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.677370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.685991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef6cc8 00:28:51.821 [2024-11-06 15:41:09.687394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.687409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.693873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eed0b0 00:28:51.821 [2024-11-06 15:41:09.694922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.694938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.702256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eebfd0 00:28:51.821 [2024-11-06 15:41:09.703336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.703351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.710757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016efe2e8 00:28:51.821 [2024-11-06 15:41:09.711808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.711823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.719240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ede8a8 00:28:51.821 [2024-11-06 15:41:09.720301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.720317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.728847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eee190 00:28:51.821 [2024-11-06 15:41:09.730393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.730408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:51.821 29989.00 IOPS, 
117.14 MiB/s [2024-11-06T14:41:09.804Z] [2024-11-06 15:41:09.735907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef3e60 00:28:51.821 [2024-11-06 15:41:09.736812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.736828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.744973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016edece0 00:28:51.821 [2024-11-06 15:41:09.746018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.746033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.753471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016efda78 00:28:51.821 [2024-11-06 15:41:09.754555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.754570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.761925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee1b48 00:28:51.821 [2024-11-06 15:41:09.762903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.762919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.770375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee8088 00:28:51.821 [2024-11-06 15:41:09.771417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.771433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.778876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eef270 00:28:51.821 [2024-11-06 15:41:09.779963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.779979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.787339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee1f80 00:28:51.821 [2024-11-06 15:41:09.788397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.788413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:51.821 [2024-11-06 15:41:09.795822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef6cc8 00:28:51.821 [2024-11-06 15:41:09.796899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.821 [2024-11-06 15:41:09.796914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:52.082 [2024-11-06 15:41:09.804319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016efd640 00:28:52.082 [2024-11-06 15:41:09.805403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.082 [2024-11-06 15:41:09.805419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.812784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee1710 00:28:52.083 [2024-11-06 15:41:09.813874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.813889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.821243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee84c0 00:28:52.083 [2024-11-06 15:41:09.822331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.822346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.829710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eeee38 00:28:52.083 [2024-11-06 15:41:09.830796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.830812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.838184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee2c28 00:28:52.083 [2024-11-06 15:41:09.839269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.839284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.846666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016edece0 00:28:52.083 [2024-11-06 15:41:09.847750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.847766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.855117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016efda78 00:28:52.083 [2024-11-06 15:41:09.856201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.856219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.863566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee1b48 00:28:52.083 [2024-11-06 15:41:09.864656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.864671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.872069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee8088 00:28:52.083 [2024-11-06 15:41:09.873169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.873184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.880546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eef270 00:28:52.083 [2024-11-06 15:41:09.881638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.881653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.888504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef3a28 00:28:52.083 [2024-11-06 15:41:09.889574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.889590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.897845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eed4e8 00:28:52.083 [2024-11-06 15:41:09.899044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.899059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.906469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef2d80 00:28:52.083 [2024-11-06 15:41:09.907684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.907700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.914971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee0ea0 00:28:52.083 [2024-11-06 15:41:09.916174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.916190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.923468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eed4e8 00:28:52.083 [2024-11-06 15:41:09.924682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.924698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.931973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef2d80 00:28:52.083 [2024-11-06 15:41:09.933168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.933184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.940477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee0ea0 00:28:52.083 [2024-11-06 15:41:09.941636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.941652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.947454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016efa3a0 00:28:52.083 [2024-11-06 15:41:09.948201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.948217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.955922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef92c0 00:28:52.083 [2024-11-06 15:41:09.956618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.956633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.964400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef7da8 00:28:52.083 [2024-11-06 15:41:09.965127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.965142] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.972902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef2948 00:28:52.083 [2024-11-06 15:41:09.973622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.973636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.981405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eff3c8 00:28:52.083 [2024-11-06 15:41:09.982134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.982149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.989913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee5a90 00:28:52.083 [2024-11-06 15:41:09.990633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.990648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:52.083 [2024-11-06 15:41:09.998366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee6b70 00:28:52.083 [2024-11-06 15:41:09.999171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.083 [2024-11-06 15:41:09.999186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:52.084 [2024-11-06 15:41:10.007393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eed0b0 00:28:52.084 [2024-11-06 15:41:10.008123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.084 [2024-11-06 15:41:10.008140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:52.084 [2024-11-06 15:41:10.015910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eebfd0 00:28:52.084 [2024-11-06 15:41:10.016643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.084 [2024-11-06 15:41:10.016660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:52.084 [2024-11-06 15:41:10.024775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee01f8 00:28:52.084 [2024-11-06 15:41:10.025263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.084 [2024-11-06 15:41:10.025279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:52.084 [2024-11-06 15:41:10.033515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eee5c8 00:28:52.084 [2024-11-06 15:41:10.034379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.084 [2024-11-06 15:41:10.034395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:52.084 [2024-11-06 15:41:10.041925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee3d08 00:28:52.084 [2024-11-06 15:41:10.042637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.084 [2024-11-06 15:41:10.042652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:52.084 [2024-11-06 15:41:10.050462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee1b48 00:28:52.084 [2024-11-06 15:41:10.051326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.084 [2024-11-06 15:41:10.051342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:52.084 [2024-11-06 15:41:10.059001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee9168 00:28:52.084 [2024-11-06 15:41:10.059846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.084 [2024-11-06 15:41:10.059861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:52.346 [2024-11-06 15:41:10.067486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eeaab8 00:28:52.346 [2024-11-06 15:41:10.068321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.346 [2024-11-06 15:41:10.068337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:52.346 [2024-11-06 15:41:10.075988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016efda78 00:28:52.346 [2024-11-06 15:41:10.076822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.346 [2024-11-06 15:41:10.076841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:52.346 [2024-11-06 15:41:10.084478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee99d8 00:28:52.346 [2024-11-06 15:41:10.085315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.346 [2024-11-06 15:41:10.085331] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:52.346 [2024-11-06 15:41:10.092410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef31b8 00:28:52.346 [2024-11-06 15:41:10.093240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.346 [2024-11-06 15:41:10.093255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:52.346 [2024-11-06 15:41:10.101737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee7c50 00:28:52.346 [2024-11-06 15:41:10.102697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.346 [2024-11-06 15:41:10.102713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:52.346 [2024-11-06 15:41:10.110225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef7da8 00:28:52.346 [2024-11-06 15:41:10.111183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.346 [2024-11-06 15:41:10.111199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:52.346 [2024-11-06 15:41:10.118699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ede8a8 00:28:52.346 [2024-11-06 15:41:10.119680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.346 [2024-11-06 15:41:10.119696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:52.346 [2024-11-06 15:41:10.127375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef7970 00:28:52.346 [2024-11-06 15:41:10.128338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.346 [2024-11-06 15:41:10.128354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.346 [2024-11-06 15:41:10.135874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ef96f8 00:28:52.346 [2024-11-06 15:41:10.136829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.346 [2024-11-06 15:41:10.136845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.346 [2024-11-06 15:41:10.144342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016eebfd0 00:28:52.346 [2024-11-06 15:41:10.145320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.346 [2024-11-06 
15:41:10.145336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:28:52.346 [2024-11-06 15:41:10.152838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016efb048
00:28:52.346 [2024-11-06 15:41:10.153817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:52.346 [2024-11-06 15:41:10.153833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0
[... the same three-line record repeats for the rest of the run: a data_crc32_calc_done data digest error on tqpair=(0xd01750) at a varying pdu offset (0x200016e...), the WRITE command print (sqid:1, len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1; only the timestamps, cid, lba, pdu, and sqhd values differ ...]
00:28:52.872 30054.50 IOPS, 117.40 MiB/s [2024-11-06T14:41:10.855Z]
[2024-11-06 15:41:10.739067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01750) with pdu=0x200016ee5220
[2024-11-06 15:41:10.740106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:52.872 [2024-11-06 15:41:10.740121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:28:52.872
00:28:52.872 Latency(us)
00:28:52.872 [2024-11-06T14:41:10.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:52.872 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:52.872 nvme0n1 : 2.01 30051.86 117.39 0.00 0.00 4253.68 2020.69 10485.76
00:28:52.872 [2024-11-06T14:41:10.855Z] ===================================================================================================================
00:28:52.872 [2024-11-06T14:41:10.855Z] Total : 30051.86 117.39 0.00 0.00 4253.68 2020.69 10485.76
00:28:52.872 {
00:28:52.872   "results": [
00:28:52.872     {
00:28:52.872       "job": "nvme0n1",
00:28:52.872       "core_mask": "0x2",
00:28:52.872       "workload": "randwrite",
00:28:52.872       "status": "finished",
00:28:52.872       "queue_depth": 128,
00:28:52.872       "io_size": 4096,
00:28:52.872       "runtime": 2.006598,
00:28:52.872       "iops": 30051.858917431393,
00:28:52.872       "mibps": 117.39007389621638,
00:28:52.872       "io_failed": 0,
00:28:52.872       "io_timeout": 0,
00:28:52.872       "avg_latency_us": 4253.678316915968,
00:28:52.872       "min_latency_us": 2020.6933333333334,
00:28:52.872       "max_latency_us": 10485.76
00:28:52.872     }
00:28:52.872   ],
00:28:52.872   "core_count": 1
00:28:52.872 }
00:28:52.872 15:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
15:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
15:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:52.872 | .driver_specific
00:28:52.872 | .nvme_error
00:28:52.872 | .status_code
00:28:52.872 | .command_transient_transport_error'
15:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:53.134 15:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 ))
00:28:53.134 15:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3954024
00:28:53.134 15:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3954024 ']'
00:28:53.134 15:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3954024
00:28:53.134 15:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:53.134 15:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:53.134 15:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3954024
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3954024'
killing process with pid 3954024
15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3954024
Received shutdown signal, test time was about 2.000000 seconds
00:28:53.134
00:28:53.134 Latency(us)
00:28:53.134 [2024-11-06T14:41:11.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:53.134 [2024-11-06T14:41:11.117Z] ===================================================================================================================
00:28:53.134 [2024-11-06T14:41:11.117Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3954024
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3954710
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3954710 /var/tmp/bperf.sock
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3954710 ']'
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:53.134 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:53.395 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:53.395 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:53.395 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:53.395 [2024-11-06 15:41:11.173016] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:28:53.395 [2024-11-06 15:41:11.173090] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954710 ]
00:28:53.395 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:53.395 Zero copy mechanism will not be used.
00:28:53.395 [2024-11-06 15:41:11.256532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:53.395 [2024-11-06 15:41:11.285877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:53.967 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:53.967 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:53.967 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:53.967 15:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:54.228 15:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:54.228 15:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.228 15:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:54.228 15:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.228 15:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:54.228 15:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:54.489 nvme0n1
00:28:54.750 15:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:54.750 15:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.750 15:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:54.750 15:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.750 15:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:54.750 15:41:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:54.750 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:54.750 Zero copy mechanism will not be used.
00:28:54.750 Running I/O for 2 seconds...
00:28:54.750 [2024-11-06 15:41:12.591055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90
[2024-11-06 15:41:12.591279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.750 [2024-11-06 15:41:12.591305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line record repeats throughout the run: a data_crc32_calc_done data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90, the WRITE command print (sqid:1 cid:15, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1; only the timestamps, lba, and sqhd values differ ...]
00:28:55.296 [2024-11-06 15:41:12.999042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90
00:28:55.296 [2024-11-06 15:41:12.999348]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:12.999365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.009386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.009692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.009709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.017635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.017815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.017832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.022951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.023107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.023123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.031354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.031631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.031649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.037573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.037858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.037874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.041611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.041778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.041794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.045434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 
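[Editor's note] The paired *ERROR*/*NOTICE* lines above are one repeating event: tcp.c:2233 (data_crc32_calc_done) recomputes the CRC32C data digest over a received PDU's payload, finds it disagrees with the DDGST the peer sent, and the affected WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retryable. Since this run injects corrupted digests on purpose, these failures are the expected outcome, not a regression. For reference only, here is a minimal, dependency-free sketch of the reflected CRC-32C (Castagnoli) that NVMe/TCP digests use; SPDK's own implementation is an optimized (table-driven/accelerated) one, so treat this purely as an illustration:

    # crc32c_sketch.py -- bitwise CRC-32C (Castagnoli), the checksum used for
    # NVMe/TCP header (HDGST) and data (DDGST) digests. Slow but exact.
    CRC32C_POLY_REFLECTED = 0x82F63B78  # reflected form of 0x1EDC6F41

    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF                      # seed with all ones
        for byte in data:
            crc ^= byte
            for _ in range(8):                # one bit per iteration
                crc = (crc >> 1) ^ CRC32C_POLY_REFLECTED if crc & 1 else crc >> 1
        return crc ^ 0xFFFFFFFF               # final complement

    # Published check value for CRC-32C:
    assert crc32c(b"123456789") == 0xE3069283

A receiver-side digest check then reduces to comparing crc32c(payload) against the received DDGST; when that comparison fails, SPDK emits the "Data digest error" lines seen throughout this section.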
00:28:55.296 [2024-11-06 15:41:13.045595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.045612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.052923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.053193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.053210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.057313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.057475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.057492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.062407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.062691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.062706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.066526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.066688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.066704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.073150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.073432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.073450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.077924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.078083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.078100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.081775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.081935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.081951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.088408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.088568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.088584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.096011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.096396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.096413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.104195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.104455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.104470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.114415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.114664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.114680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.124971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.125261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.125278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.134788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.135068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.135085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.145564] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.145835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.296 [2024-11-06 15:41:13.145853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.296 [2024-11-06 15:41:13.154037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.296 [2024-11-06 15:41:13.154279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.154294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.157937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.158003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.158019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.161083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.161151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.161167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.164167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.164231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.164246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.167224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.167275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.167290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.171267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.171323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.171338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.176758] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.176837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.176852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.180060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.180109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.180125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.182830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.182877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.182893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.186762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.186926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.186941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.191391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.191431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.191447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.195616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.195669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.195685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.198659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.198702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.198717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.297 
[2024-11-06 15:41:13.201983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.202049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.202064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.204923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.204982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.204998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.208071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.208116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.208132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.211831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.211874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.211889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.214742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.214796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.214811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.217500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.217552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.217568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.220494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.220548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.220563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:28:55.297 [2024-11-06 15:41:13.223893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.223946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.223961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.226861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.226920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.226935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.230116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.230169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.230185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.234932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.235010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.235025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.238999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.239050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.239066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.242429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.242511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.242528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.248999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.249053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.249069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.252907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.252960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.252976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.255953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.256008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.256024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.258775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.258830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.258845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.261515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.261580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.261596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.264227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.264291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.297 [2024-11-06 15:41:13.264306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.297 [2024-11-06 15:41:13.266968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.297 [2024-11-06 15:41:13.267027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.298 [2024-11-06 15:41:13.267042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.298 [2024-11-06 15:41:13.269629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.298 [2024-11-06 15:41:13.269687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.298 [2024-11-06 15:41:13.269702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.298 [2024-11-06 15:41:13.272445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.298 [2024-11-06 15:41:13.272526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.298 [2024-11-06 15:41:13.272541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.275323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.566 [2024-11-06 15:41:13.275384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.566 [2024-11-06 15:41:13.275399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.278470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.566 [2024-11-06 15:41:13.278524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.566 [2024-11-06 15:41:13.278540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.281356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.566 [2024-11-06 15:41:13.281422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.566 [2024-11-06 15:41:13.281437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.284228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.566 [2024-11-06 15:41:13.284280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.566 [2024-11-06 15:41:13.284296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.287317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.566 [2024-11-06 15:41:13.287375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.566 [2024-11-06 15:41:13.287391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.289960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.566 [2024-11-06 15:41:13.290016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.566 [2024-11-06 15:41:13.290031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.292889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.566 [2024-11-06 15:41:13.292943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.566 [2024-11-06 15:41:13.292959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.295642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.566 [2024-11-06 15:41:13.295699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.566 [2024-11-06 15:41:13.295714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.298161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.566 [2024-11-06 15:41:13.298216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.566 [2024-11-06 15:41:13.298231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.300712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.566 [2024-11-06 15:41:13.300779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.566 [2024-11-06 15:41:13.300794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.303231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.566 [2024-11-06 15:41:13.303297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.566 [2024-11-06 15:41:13.303312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.305806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.566 [2024-11-06 15:41:13.305870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.566 [2024-11-06 15:41:13.305885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.566 [2024-11-06 15:41:13.308802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.308878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.308893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.313433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.313697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.313713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.320284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.320364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.320380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.327268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.327526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.327541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.337294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.337576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.337595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.346627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.346918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.346936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.356373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.356604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.356620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.360429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.360490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 
15:41:13.360506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.363208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.363261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.363277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.366241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.366301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.366317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.369386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.369455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.369470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.374774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.374831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.374847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.377722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.377782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.377798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.380473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.380525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.380543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.383067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.383129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:55.567 [2024-11-06 15:41:13.383144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.385659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.385732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.385753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.389096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.389179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.389195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.391782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.391844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.391859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.394366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.394426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.394441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.396895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.396951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.396967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.399998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.400087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.567 [2024-11-06 15:41:13.400103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.567 [2024-11-06 15:41:13.403296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:55.567 [2024-11-06 15:41:13.403363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA 
00:28:55.567-00:28:56.099: repeated data digest errors on tqpair=(0xd01a90) with pdu=0x200016efef90, one per WRITE listed below. Each record consists of three prints issued within roughly 300 µs of each other:
  tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90
  nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:<lba> len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
  nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:<sqhd> p:0 m:0 dnr:0
Only the error timestamp, lba, and sqhd vary per record:

time [2024-11-06 15:41:..]  lba      sqhd
13.403378                   (cont'd from the preceding record)  0041
13.406537                   23424    0061
13.412609                   7072     0001
13.421649                   17280    0021
13.430631                   23584    0041
13.440801                   960      0061
13.451188                   23744    0001
13.461608                   8224     0021
13.471635                   832      0041
13.482022                   7520     0061
13.489222                   21696    0001
13.492501                   25312    0021
13.495277                   9376     0041
13.498029                   19776    0061
13.500822                   1472     0001
13.503403                   12128    0021
13.506204                   22848    0041
13.509453                   19936    0061
13.515685                   2048     0001
13.518543                   3936     0021
13.521511                   22688    0041
13.524068                   2080     0061
13.526585                   11680    0001
13.529092                   8832     0021
13.531627                   9120     0041
13.534131                   8832     0061
13.536660                   24320    0001
13.539151                   18080    0021
13.541660                   1440     0041
13.544226                   11776    0061
13.546713                   12800    0001
13.549212                   16128    0021
13.551904                   17376    0041
13.555396                   2304     0061
13.562266                   25344    0001
13.569552                   3552     0021
5632.00 IOPS, 704.00 MiB/s [2024-11-06T14:41:13.815Z]
13.577954                   9568     0041
13.582462                   12384    0061
13.586247                   9280     0001
13.589741                   3712     0021
13.593110                   5952     0041
13.596359                   7872     0061
13.599845                   4576     0001
13.604355                   19136    0021
13.608483                   19712    0041
13.613568                   192      0061
13.620001                   17888    0001
13.626844                   22080    0021
13.630921                   3968     0041
13.634717                   17728    0061
13.638561                   17632    0001
13.644091                   8896     0021
13.647993                   17984    0041
13.651637                   7360     0061
13.655399                   17568    0001
13.658354                   19424    0021
13.661704                   13888    0041
13.664820                   6656     0061
13.667458                   18784    0001
13.670051                   8032     0021
13.672598                   17024    0041
13.675345                   16768    0061
13.678564                   9632     0001
13.685801                   6336     0021
13.695709                   16512    0041
13.705633                   1632     0061
13.709580                   24896    0001
13.712268                   23072    0021
13.714922                   8032     0041
13.717572                   22784    0061
13.720154                   18080    0001
13.722708                   14016    0021
13.725259                   5184     0041
13.727839                   19488    0061
13.730400                   8640     0001
13.732907                   8288     0021
13.735417                   14912    0041
13.738503                   7232     0061
13.741369                   256      0001
13.743964                   4832     0021
13.748214                   3776     0041
13.751412                   19584    0061
13.753898                   96       0001
13.756445                   3040     0021
13.759025                   21696    0041
13.761811                   1344     0061
13.767200                   13888    0001
13.772942                   1792     0021
13.777589                   13696    0041
13.783753                   2336     0061
13.786700                   7872     0001
13.789231                   5632     0021
13.791823                   4192     0041
13.794381                   16640    0061
13.796974                   18464    0001
13.799553                   3200     0021
13.802116                   21024    0041
13.805365                   13184    0061
13.808141                   16960    0001
13.811234                   7200     0021
13.815800                   20064    0041
13.818354                   6464     0061
13.820949                   19392    0001
13.823496                   9408     0021
13.826017                   5664     0041
13.830226                   4384     0061
13.838180                   16544    0001
13.847550                   18432    0021
13.852160                   3200     0041
13.855765                   13248    0061
13.861854                   20192    0001
13.865092                   10624    0021
13.868704                   23264    0041
13.871979                   21408    0061
13.875690                   14080    0001
13.879431                   9984     0021
13.882821                   20512    0041
13.887365                   16768    0061
13.891207                   5888     0001
13.899316                   6144     0021
13.903224                   18912    0041
13.907015                   16448    0061
13.913420                   3840     0001
13.918512                   9856     0021
13.926724                   9952     0041
13.930311                   19552    0061
13.935350                   16736    0001
13.941379                   16896    0021
13.947542                   21664    0041
13.951216                   2208     0061
13.957950                   12896    0001
13.966819                   16448    0021
13.976952                   3616     0041
13.986790                   16832    0061
13.997153                   20448    0001
14.006926                   22720    0021
14.014979                   10432    0041
14.025766                   16960    0061
14.032787                   5696     0001
14.035844                   16768    0021
14.038546                   18304    0041
14.041245                   7936     0061
14.043923                   23488    0001
14.046587                   21984    0021
14.049217                   17344    0041
14.051863                   4768     0061
14.054383                   (record truncated in source)
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.099 [2024-11-06 15:41:14.054448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.099 [2024-11-06 15:41:14.056913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.099 [2024-11-06 15:41:14.056966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.100 [2024-11-06 15:41:14.056982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.100 [2024-11-06 15:41:14.059430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.100 [2024-11-06 15:41:14.059476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.100 [2024-11-06 15:41:14.059494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.100 [2024-11-06 15:41:14.061986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.100 [2024-11-06 15:41:14.062034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.100 [2024-11-06 15:41:14.062049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.100 [2024-11-06 15:41:14.064914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.100 [2024-11-06 15:41:14.064990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.100 [2024-11-06 15:41:14.065006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.100 [2024-11-06 15:41:14.070188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.100 [2024-11-06 15:41:14.070429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.100 [2024-11-06 15:41:14.070445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.362 [2024-11-06 15:41:14.079950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.362 [2024-11-06 15:41:14.080168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.362 [2024-11-06 15:41:14.080183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.362 [2024-11-06 15:41:14.084431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.362 [2024-11-06 
15:41:14.084523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.362 [2024-11-06 15:41:14.084538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.362 [2024-11-06 15:41:14.087470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.362 [2024-11-06 15:41:14.087529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.362 [2024-11-06 15:41:14.087544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.362 [2024-11-06 15:41:14.090028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.362 [2024-11-06 15:41:14.090076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.362 [2024-11-06 15:41:14.090091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.362 [2024-11-06 15:41:14.092988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.362 [2024-11-06 15:41:14.093052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.362 [2024-11-06 15:41:14.093068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.362 [2024-11-06 15:41:14.098702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.362 [2024-11-06 15:41:14.098755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.362 [2024-11-06 15:41:14.098771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.362 [2024-11-06 15:41:14.101234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.362 [2024-11-06 15:41:14.101281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.362 [2024-11-06 15:41:14.101296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.362 [2024-11-06 15:41:14.103759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.362 [2024-11-06 15:41:14.103805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.362 [2024-11-06 15:41:14.103820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.362 [2024-11-06 15:41:14.106269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 
00:28:56.362 [2024-11-06 15:41:14.106331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.362 [2024-11-06 15:41:14.106346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.362 [2024-11-06 15:41:14.109148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.362 [2024-11-06 15:41:14.109206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.362 [2024-11-06 15:41:14.109221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.362 [2024-11-06 15:41:14.111773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.362 [2024-11-06 15:41:14.111823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.362 [2024-11-06 15:41:14.111838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.362 [2024-11-06 15:41:14.114281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.114331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.114346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.116810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.116864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.116879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.119302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.119351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.119366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.121823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.121871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.121887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.124341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with 
pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.124434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.124449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.127556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.127617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.127633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.135830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.136023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.136039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.145995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.146227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.146242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.156126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.156405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.156422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.164569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.164676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.164691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.168871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.168916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.168931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.171616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.171688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.171707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.174526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.174575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.174591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.177240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.177287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.177302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.182306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.182350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.182366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.185040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.185085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.185101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.187624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.187671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.187686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.190186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.190231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.190246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.192848] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.192917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.192932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.196178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.196236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.196251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.205301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.205498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.205517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.214080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.214339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.214356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.222841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.223084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.223100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.229686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.229756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.229772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.363 [2024-11-06 15:41:14.232368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.363 [2024-11-06 15:41:14.232413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.363 [2024-11-06 15:41:14.232429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.235312] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.235357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.235372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.237964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.238031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.238046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.240588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.240631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.240646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.243196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.243239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.243254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.245942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.246022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.246037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.248718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.248785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.248800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.251466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.251517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.251532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.364 
[2024-11-06 15:41:14.254005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.254047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.254062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.256505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.256552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.256567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.259959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.260022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.260037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.264384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.264441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.264457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.267995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.268045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.268060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.270611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.270673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.270688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.273104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.273164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.273180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:28:56.364 [2024-11-06 15:41:14.276454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.276518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.276533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.281222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.281313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.281328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.291528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.291812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.291828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.302061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.302195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.302211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.312225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.312461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.312477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.322092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.322334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.322349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.364 [2024-11-06 15:41:14.332500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.364 [2024-11-06 15:41:14.332709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.364 [2024-11-06 15:41:14.332724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.626 [2024-11-06 15:41:14.342628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.626 [2024-11-06 15:41:14.342951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.626 [2024-11-06 15:41:14.342971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.626 [2024-11-06 15:41:14.352261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.626 [2024-11-06 15:41:14.352542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.626 [2024-11-06 15:41:14.352559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.626 [2024-11-06 15:41:14.362708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.626 [2024-11-06 15:41:14.362819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.626 [2024-11-06 15:41:14.362834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.626 [2024-11-06 15:41:14.373177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.626 [2024-11-06 15:41:14.373356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.626 [2024-11-06 15:41:14.373371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.383129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.383435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.383451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.392918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.392992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.393007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.402349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.402622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.402638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.412583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.412841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.412857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.422688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.422789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.422805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.432882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.433129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.433144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.442987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.443294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.443309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.453725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.454018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.454034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.463961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.464221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.464237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.474436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.474669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.474685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.485073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.485333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.485349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.495633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.495932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.495949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.506728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.506978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.506994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.516005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.516235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.516250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.525777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.525888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.525904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.530853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.530898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.530913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.534486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.534530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.534545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.538452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.538499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.538514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.546210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.546265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.546281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.549807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.549852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.549868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.552755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.552886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.552902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.556631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.556728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.556744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.561589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.561730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 15:41:14.561754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.627 [2024-11-06 15:41:14.567871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90 00:28:56.627 [2024-11-06 15:41:14.568131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.627 [2024-11-06 
00:28:56.627 [2024-11-06 15:41:14.567871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90
00:28:56.627 [2024-11-06 15:41:14.568131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:56.627 [2024-11-06 15:41:14.568146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:56.627 [2024-11-06 15:41:14.576335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01a90) with pdu=0x200016efef90
00:28:56.628 [2024-11-06 15:41:14.576613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:56.628 [2024-11-06 15:41:14.576628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:56.628 5939.00 IOPS, 742.38 MiB/s
00:28:56.628 Latency(us)
00:28:56.628 [2024-11-06T14:41:14.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:56.628 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:56.628 nvme0n1 : 2.01 5928.10 741.01 0.00 0.00 2692.95 1174.19 14417.92
00:28:56.628 [2024-11-06T14:41:14.611Z] ===================================================================================================================
00:28:56.628 [2024-11-06T14:41:14.611Z] Total : 5928.10 741.01 0.00 0.00 2692.95 1174.19 14417.92
00:28:56.628 {
00:28:56.628 "results": [
00:28:56.628 {
00:28:56.628 "job": "nvme0n1",
00:28:56.628 "core_mask": "0x2",
00:28:56.628 "workload": "randwrite",
00:28:56.628 "status": "finished",
00:28:56.628 "queue_depth": 16,
00:28:56.628 "io_size": 131072,
00:28:56.628 "runtime": 2.006884,
00:28:56.628 "iops": 5928.09549530516,
00:28:56.628 "mibps": 741.011936913145,
00:28:56.628 "io_failed": 0,
00:28:56.628 "io_timeout": 0,
00:28:56.628 "avg_latency_us": 2692.94728194783,
00:28:56.628 "min_latency_us": 1174.1866666666667,
00:28:56.628 "max_latency_us": 14417.92
00:28:56.628 }
00:28:56.628 ],
00:28:56.628 "core_count": 1
00:28:56.628 }
00:28:56.889 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:56.889 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:56.889 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:56.889 | .driver_specific
00:28:56.889 | .nvme_error
00:28:56.889 | .status_code
00:28:56.889 | .command_transient_transport_error'
00:28:56.889 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:56.889 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 383 > 0 ))
00:28:56.889 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3954710
00:28:56.889 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3954710 ']'
00:28:56.889 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3954710
00:28:56.889 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:56.889 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:56.889 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3954710
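The trace above is host/digest.sh verifying the run: it pulls bdevperf's iostat over the RPC socket and checks that the transient transport error counter moved (383 errors here). A minimal sketch of what that get_transient_errcount step amounts to, reconstructed from the traced commands rather than the verbatim helper:

    #!/usr/bin/env bash
    # Read the transient transport error counter for a bdevperf-attached bdev.
    # The rpc.py path and socket are the ones used in this run.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    get_transient_errcount() {
        local bdev=$1
        "$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    # The digest-error test passes when corrupted writes were actually counted:
    (( $(get_transient_errcount nvme0n1) > 0 ))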
00:28:57.150 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:57.150 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:57.150 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3954710'
00:28:57.150 killing process with pid 3954710
00:28:57.150 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3954710
00:28:57.150 Received shutdown signal, test time was about 2.000000 seconds
00:28:57.150
00:28:57.150 Latency(us)
00:28:57.150 [2024-11-06T14:41:15.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:57.150 [2024-11-06T14:41:15.133Z] ===================================================================================================================
00:28:57.150 [2024-11-06T14:41:15.133Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:57.150 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3954710
00:28:57.150 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3952304
00:28:57.150 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3952304 ']'
00:28:57.150 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3952304
00:28:57.150 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:57.150 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:57.150 15:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3952304
00:28:57.150 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:28:57.150 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:28:57.150 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3952304'
00:28:57.150 killing process with pid 3952304
00:28:57.150 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3952304
00:28:57.150 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3952304
00:28:57.411
00:28:57.411 real 0m16.482s
00:28:57.411 user 0m32.589s
00:28:57.411 sys 0m3.647s
00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable
00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:57.411 ************************************
00:28:57.411 END TEST nvmf_digest_error
00:28:57.411 ************************************
00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
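The @952 through @976 steps above are autotest_common.sh's killprocess walking its guard rails before signalling the bperf and nvmf target processes. Paraphrased as a standalone sketch (not the verbatim function; the sudo branch of the real helper does more than the stub here):

    # killprocess, paraphrased from the traced checks.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                 # @952: refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 1    # @956: bail if the process is already gone
        if [ "$(uname)" = Linux ]; then           # @957: comm lookup is Linux-only
            process_name=$(ps --no-headers -o comm= "$pid")   # @958
        fi
        # @962: if the pid is a sudo wrapper, the real helper signals its child instead
        [ "$process_name" = sudo ] && return 1    # stub for that branch
        echo "killing process with pid $pid"      # @970
        kill "$pid"                               # @971
        wait "$pid"                               # @976: reap it, collecting the exit status
    }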
tcp ']' 00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.411 rmmod nvme_tcp 00:28:57.411 rmmod nvme_fabrics 00:28:57.411 rmmod nvme_keyring 00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3952304 ']' 00:28:57.411 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3952304 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 3952304 ']' 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 3952304 00:28:57.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3952304) - No such process 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 3952304 is not found' 00:28:57.412 Process with pid 3952304 is not found 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.412 15:41:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.956 00:28:59.956 real 0m43.629s 00:28:59.956 user 1m8.052s 00:28:59.956 sys 0m13.453s 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:59.956 ************************************ 00:28:59.956 END TEST nvmf_digest 00:28:59.956 ************************************ 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.956 ************************************ 00:28:59.956 START TEST nvmf_bdevperf 00:28:59.956 ************************************ 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:59.956 * Looking for test storage... 00:28:59.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:59.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.956 --rc genhtml_branch_coverage=1 00:28:59.956 --rc genhtml_function_coverage=1 00:28:59.956 --rc genhtml_legend=1 00:28:59.956 --rc geninfo_all_blocks=1 00:28:59.956 --rc geninfo_unexecuted_blocks=1 00:28:59.956 00:28:59.956 ' 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:59.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.956 --rc genhtml_branch_coverage=1 00:28:59.956 --rc genhtml_function_coverage=1 00:28:59.956 --rc genhtml_legend=1 00:28:59.956 --rc geninfo_all_blocks=1 00:28:59.956 --rc geninfo_unexecuted_blocks=1 00:28:59.956 00:28:59.956 ' 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:59.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.956 --rc genhtml_branch_coverage=1 00:28:59.956 --rc genhtml_function_coverage=1 00:28:59.956 --rc genhtml_legend=1 00:28:59.956 --rc geninfo_all_blocks=1 00:28:59.956 --rc geninfo_unexecuted_blocks=1 00:28:59.956 00:28:59.956 ' 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:59.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.956 --rc genhtml_branch_coverage=1 00:28:59.956 --rc genhtml_function_coverage=1 00:28:59.956 --rc genhtml_legend=1 00:28:59.956 --rc geninfo_all_blocks=1 00:28:59.956 --rc geninfo_unexecuted_blocks=1 00:28:59.956 00:28:59.956 ' 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.956 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.957 15:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:08.106 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:08.106 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:08.106 Found net devices under 0000:31:00.0: cvl_0_0 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:08.106 Found net devices under 0000:31:00.1: cvl_0_1 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.106 15:41:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.106 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.106 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.106 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:08.106 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.106 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.106 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.106 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:08.106 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:08.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:29:08.107 00:29:08.107 --- 10.0.0.2 ping statistics --- 00:29:08.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.107 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:08.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:29:08.107 00:29:08.107 --- 10.0.0.1 ping statistics --- 00:29:08.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.107 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3959758 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3959758 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3959758 ']' 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:08.107 15:41:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.107 [2024-11-06 15:41:25.345920] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:29:08.107 [2024-11-06 15:41:25.345985] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.107 [2024-11-06 15:41:25.448550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:08.107 [2024-11-06 15:41:25.500581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.107 [2024-11-06 15:41:25.500632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.107 [2024-11-06 15:41:25.500641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.107 [2024-11-06 15:41:25.500649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.107 [2024-11-06 15:41:25.500655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:08.107 [2024-11-06 15:41:25.502504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.107 [2024-11-06 15:41:25.502662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.107 [2024-11-06 15:41:25.502662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.368 [2024-11-06 15:41:26.227499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.368 Malloc0 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:08.368 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.369 [2024-11-06 15:41:26.305550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.369 { 00:29:08.369 "params": { 00:29:08.369 "name": "Nvme$subsystem", 00:29:08.369 "trtype": "$TEST_TRANSPORT", 00:29:08.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.369 "adrfam": "ipv4", 00:29:08.369 "trsvcid": "$NVMF_PORT", 00:29:08.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.369 "hdgst": ${hdgst:-false}, 00:29:08.369 "ddgst": ${ddgst:-false} 00:29:08.369 }, 00:29:08.369 "method": "bdev_nvme_attach_controller" 00:29:08.369 } 00:29:08.369 EOF 00:29:08.369 )") 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:08.369 15:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:08.369 "params": { 00:29:08.369 "name": "Nvme1", 00:29:08.369 "trtype": "tcp", 00:29:08.369 "traddr": "10.0.0.2", 00:29:08.369 "adrfam": "ipv4", 00:29:08.369 "trsvcid": "4420", 00:29:08.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:08.369 "hdgst": false, 00:29:08.369 "ddgst": false 00:29:08.369 }, 00:29:08.369 "method": "bdev_nvme_attach_controller" 00:29:08.369 }' 00:29:08.630 [2024-11-06 15:41:26.365544] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:29:08.630 [2024-11-06 15:41:26.365607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3959833 ] 00:29:08.630 [2024-11-06 15:41:26.456966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.630 [2024-11-06 15:41:26.510654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.892 Running I/O for 1 seconds... 00:29:09.835 8720.00 IOPS, 34.06 MiB/s 00:29:09.835 Latency(us) 00:29:09.835 [2024-11-06T14:41:27.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.835 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:09.835 Verification LBA range: start 0x0 length 0x4000 00:29:09.835 Nvme1n1 : 1.01 8758.21 34.21 0.00 0.00 14550.75 2921.81 12178.77 00:29:09.835 [2024-11-06T14:41:27.818Z] =================================================================================================================== 00:29:09.835 [2024-11-06T14:41:27.818Z] Total : 8758.21 34.21 0.00 0.00 14550.75 2921.81 12178.77 00:29:10.096 15:41:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3960135 00:29:10.096 15:41:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:10.096 15:41:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:10.096 15:41:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:10.096 15:41:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:10.096 15:41:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:10.096 15:41:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:10.096 15:41:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:10.096 { 00:29:10.096 "params": { 00:29:10.096 "name": "Nvme$subsystem", 00:29:10.096 "trtype": "$TEST_TRANSPORT", 00:29:10.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.096 "adrfam": "ipv4", 00:29:10.096 "trsvcid": "$NVMF_PORT", 00:29:10.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.096 "hdgst": ${hdgst:-false}, 00:29:10.096 "ddgst": ${ddgst:-false} 00:29:10.096 }, 00:29:10.096 "method": "bdev_nvme_attach_controller" 00:29:10.096 } 00:29:10.096 EOF 00:29:10.096 )") 00:29:10.096 15:41:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:10.096 15:41:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:29:10.096 15:41:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:10.096 15:41:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:10.096 "params": { 00:29:10.096 "name": "Nvme1", 00:29:10.096 "trtype": "tcp", 00:29:10.096 "traddr": "10.0.0.2", 00:29:10.096 "adrfam": "ipv4", 00:29:10.096 "trsvcid": "4420", 00:29:10.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:10.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:10.096 "hdgst": false, 00:29:10.096 "ddgst": false 00:29:10.096 }, 00:29:10.096 "method": "bdev_nvme_attach_controller" 00:29:10.096 }' 00:29:10.096 [2024-11-06 15:41:27.917764] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:29:10.096 [2024-11-06 15:41:27.917820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3960135 ] 00:29:10.096 [2024-11-06 15:41:28.007563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.096 [2024-11-06 15:41:28.042478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.357 Running I/O for 15 seconds... 00:29:12.702 11051.00 IOPS, 43.17 MiB/s [2024-11-06T14:41:30.950Z] 11106.00 IOPS, 43.38 MiB/s [2024-11-06T14:41:30.950Z] 15:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3959758 00:29:12.967 15:41:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:12.967 [2024-11-06 15:41:30.871576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 
15:41:30.871728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.871987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.871995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.872004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.872012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.872021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.872028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.872038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.872045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.872055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.967 [2024-11-06 15:41:30.872062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.967 [2024-11-06 15:41:30.872072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.968 [2024-11-06 15:41:30.872079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.968 [2024-11-06 15:41:30.872089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.968 [2024-11-06 15:41:30.872096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.968 [2024-11-06 15:41:30.872106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.968 [2024-11-06 15:41:30.872113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.968 [2024-11-06 15:41:30.872122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.968 [2024-11-06 15:41:30.872129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.968 [2024-11-06 15:41:30.872138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.968 [2024-11-06 15:41:30.872145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.968 [2024-11-06 15:41:30.872155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.968 [2024-11-06 15:41:30.872165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.968 [2024-11-06 15:41:30.872174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.968 [2024-11-06 15:41:30.872181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.968-00:29:12.970 [2024-11-06 15:41:30.872190 - 15:41:30.873862] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: one command/completion pair per remaining queued I/O on qid:1 (cid varies per command) -- WRITE sqid:1 nsid:1 lba:95792..96376 (step 8) len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:95360..95568 (step 8) len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.970 [2024-11-06 15:41:30.873871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb53c0 is same with the state(6) to be set
00:29:12.970 [2024-11-06 15:41:30.873880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:12.970 [2024-11-06 15:41:30.873886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:12.970 [2024-11-06 15:41:30.873893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95576 len:8 PRP1 0x0 PRP2 0x0
00:29:12.971 [2024-11-06 15:41:30.873900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.971 [2024-11-06 15:41:30.873983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:12.971 [2024-11-06 15:41:30.873994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.971 [2024-11-06 15:41:30.874003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:12.971 [2024-11-06 15:41:30.874010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.971 [2024-11-06 15:41:30.874018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:12.971 [2024-11-06 15:41:30.874026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.971 [2024-11-06 15:41:30.874034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:12.971 [2024-11-06 15:41:30.874041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.971 [2024-11-06 15:41:30.874049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:12.971 [2024-11-06 15:41:30.878553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.971 [2024-11-06 15:41:30.878574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:12.971 [2024-11-06 15:41:30.879359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.971 [2024-11-06 15:41:30.879376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:12.971 [2024-11-06 15:41:30.879385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:12.971 [2024-11-06 15:41:30.879604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:12.971 [2024-11-06 15:41:30.879828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.971 [2024-11-06 15:41:30.879837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.971 [2024-11-06 15:41:30.879846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.971 [2024-11-06 15:41:30.879855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.971 [2024-11-06 15:41:30.892740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.971 [2024-11-06 15:41:30.893388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.971 [2024-11-06 15:41:30.893426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:12.971 [2024-11-06 15:41:30.893439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:12.971 [2024-11-06 15:41:30.893684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:12.971 [2024-11-06 15:41:30.893918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.971 [2024-11-06 15:41:30.893928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.971 [2024-11-06 15:41:30.893937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.971 [2024-11-06 15:41:30.893945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.971 [2024-11-06 15:41:30.906825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.971 [2024-11-06 15:41:30.907433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.971 [2024-11-06 15:41:30.907472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:12.971 [2024-11-06 15:41:30.907483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:12.971 [2024-11-06 15:41:30.907722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:12.971 [2024-11-06 15:41:30.907954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.971 [2024-11-06 15:41:30.907964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.971 [2024-11-06 15:41:30.907972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.971 [2024-11-06 15:41:30.907980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.971 [2024-11-06 15:41:30.920647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.971 [2024-11-06 15:41:30.921269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-11-06 15:41:30.921309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:12.971 [2024-11-06 15:41:30.921320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:12.971 [2024-11-06 15:41:30.921572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:12.971 [2024-11-06 15:41:30.921813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.971 [2024-11-06 15:41:30.921823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.971 [2024-11-06 15:41:30.921830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.971 [2024-11-06 15:41:30.921838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:12.971 [2024-11-06 15:41:30.934563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.971 [2024-11-06 15:41:30.935219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-11-06 15:41:30.935261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:12.971 [2024-11-06 15:41:30.935276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:12.971 [2024-11-06 15:41:30.935516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:12.971 [2024-11-06 15:41:30.935739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.971 [2024-11-06 15:41:30.935757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.971 [2024-11-06 15:41:30.935765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.971 [2024-11-06 15:41:30.935773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.234 [2024-11-06 15:41:30.948446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.234 [2024-11-06 15:41:30.949026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.234 [2024-11-06 15:41:30.949068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.234 [2024-11-06 15:41:30.949081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.234 [2024-11-06 15:41:30.949325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.234 [2024-11-06 15:41:30.949548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.234 [2024-11-06 15:41:30.949556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.234 [2024-11-06 15:41:30.949565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.235 [2024-11-06 15:41:30.949573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.235 [2024-11-06 15:41:30.962253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.235 [2024-11-06 15:41:30.962881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.235 [2024-11-06 15:41:30.962924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.235 [2024-11-06 15:41:30.962936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.235 [2024-11-06 15:41:30.963178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.235 [2024-11-06 15:41:30.963400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.235 [2024-11-06 15:41:30.963409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.235 [2024-11-06 15:41:30.963417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.235 [2024-11-06 15:41:30.963425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.235 [2024-11-06 15:41:30.976110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.235 [2024-11-06 15:41:30.976691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.235 [2024-11-06 15:41:30.976735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.235 [2024-11-06 15:41:30.976756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.235 [2024-11-06 15:41:30.977002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.235 [2024-11-06 15:41:30.977230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.235 [2024-11-06 15:41:30.977239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.235 [2024-11-06 15:41:30.977246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.235 [2024-11-06 15:41:30.977254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.235 [2024-11-06 15:41:30.989932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.235 [2024-11-06 15:41:30.990618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.235 [2024-11-06 15:41:30.990666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.235 [2024-11-06 15:41:30.990677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.235 [2024-11-06 15:41:30.990932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.235 [2024-11-06 15:41:30.991157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.235 [2024-11-06 15:41:30.991165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.235 [2024-11-06 15:41:30.991173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.235 [2024-11-06 15:41:30.991181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.235 [2024-11-06 15:41:31.003864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.235 [2024-11-06 15:41:31.004560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.235 [2024-11-06 15:41:31.004611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.235 [2024-11-06 15:41:31.004622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.235 [2024-11-06 15:41:31.004881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.235 [2024-11-06 15:41:31.005106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.235 [2024-11-06 15:41:31.005115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.235 [2024-11-06 15:41:31.005123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.235 [2024-11-06 15:41:31.005131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.235 [2024-11-06 15:41:31.017819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.235 [2024-11-06 15:41:31.018514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.235 [2024-11-06 15:41:31.018567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.235 [2024-11-06 15:41:31.018579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.235 [2024-11-06 15:41:31.018840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.235 [2024-11-06 15:41:31.019066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.235 [2024-11-06 15:41:31.019074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.235 [2024-11-06 15:41:31.019089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.235 [2024-11-06 15:41:31.019097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.235 [2024-11-06 15:41:31.031609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.235 [2024-11-06 15:41:31.032258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.235 [2024-11-06 15:41:31.032312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.235 [2024-11-06 15:41:31.032325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.235 [2024-11-06 15:41:31.032573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.235 [2024-11-06 15:41:31.032811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.235 [2024-11-06 15:41:31.032821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.235 [2024-11-06 15:41:31.032829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.235 [2024-11-06 15:41:31.032838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.235 [2024-11-06 15:41:31.045538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.235 [2024-11-06 15:41:31.046250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.235 [2024-11-06 15:41:31.046307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.235 [2024-11-06 15:41:31.046320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.235 [2024-11-06 15:41:31.046571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.235 [2024-11-06 15:41:31.046809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.235 [2024-11-06 15:41:31.046819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.235 [2024-11-06 15:41:31.046828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.235 [2024-11-06 15:41:31.046837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.235 [2024-11-06 15:41:31.059351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.235 [2024-11-06 15:41:31.060082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.235 [2024-11-06 15:41:31.060144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.235 [2024-11-06 15:41:31.060157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.235 [2024-11-06 15:41:31.060412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.235 [2024-11-06 15:41:31.060638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.235 [2024-11-06 15:41:31.060648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.235 [2024-11-06 15:41:31.060656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.235 [2024-11-06 15:41:31.060666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.235 [2024-11-06 15:41:31.073188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.235 [2024-11-06 15:41:31.073856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.235 [2024-11-06 15:41:31.073902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.235 [2024-11-06 15:41:31.073913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.235 [2024-11-06 15:41:31.074152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.235 [2024-11-06 15:41:31.074375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.235 [2024-11-06 15:41:31.074383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.236 [2024-11-06 15:41:31.074391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.236 [2024-11-06 15:41:31.074400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.236 [2024-11-06 15:41:31.087103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.236 [2024-11-06 15:41:31.087787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.236 [2024-11-06 15:41:31.087849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.236 [2024-11-06 15:41:31.087861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.236 [2024-11-06 15:41:31.088115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.236 [2024-11-06 15:41:31.088341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.236 [2024-11-06 15:41:31.088350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.236 [2024-11-06 15:41:31.088359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.236 [2024-11-06 15:41:31.088368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.236 [2024-11-06 15:41:31.100951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.236 [2024-11-06 15:41:31.101662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.236 [2024-11-06 15:41:31.101723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.236 [2024-11-06 15:41:31.101736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.236 [2024-11-06 15:41:31.102000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.236 [2024-11-06 15:41:31.102227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.236 [2024-11-06 15:41:31.102236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.236 [2024-11-06 15:41:31.102243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.236 [2024-11-06 15:41:31.102253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.236 [2024-11-06 15:41:31.114754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.236 [2024-11-06 15:41:31.115465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.236 [2024-11-06 15:41:31.115526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.236 [2024-11-06 15:41:31.115546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.236 [2024-11-06 15:41:31.115811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.236 [2024-11-06 15:41:31.116038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.236 [2024-11-06 15:41:31.116047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.236 [2024-11-06 15:41:31.116055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.236 [2024-11-06 15:41:31.116063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.236 [2024-11-06 15:41:31.128596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.236 [2024-11-06 15:41:31.129185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.236 [2024-11-06 15:41:31.129214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:13.236 [2024-11-06 15:41:31.129223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:13.236 [2024-11-06 15:41:31.129445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:13.236 [2024-11-06 15:41:31.129666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.236 [2024-11-06 15:41:31.129674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.236 [2024-11-06 15:41:31.129682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.236 [2024-11-06 15:41:31.129690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.236 [2024-11-06 15:41:31.142456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.236 [2024-11-06 15:41:31.143178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.236 [2024-11-06 15:41:31.143239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.236 [2024-11-06 15:41:31.143252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.236 [2024-11-06 15:41:31.143505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.236 [2024-11-06 15:41:31.143732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.236 [2024-11-06 15:41:31.143743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.236 [2024-11-06 15:41:31.143762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.236 [2024-11-06 15:41:31.143771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.236 [2024-11-06 15:41:31.156277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.236 [2024-11-06 15:41:31.157018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.236 [2024-11-06 15:41:31.157079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.236 [2024-11-06 15:41:31.157092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.236 [2024-11-06 15:41:31.157346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.236 [2024-11-06 15:41:31.157579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.236 [2024-11-06 15:41:31.157589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.236 [2024-11-06 15:41:31.157597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.236 [2024-11-06 15:41:31.157606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.236 [2024-11-06 15:41:31.170119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.236 [2024-11-06 15:41:31.170817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.236 [2024-11-06 15:41:31.170879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.236 [2024-11-06 15:41:31.170892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.236 [2024-11-06 15:41:31.171145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.236 [2024-11-06 15:41:31.171371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.236 [2024-11-06 15:41:31.171380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.236 [2024-11-06 15:41:31.171388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.236 [2024-11-06 15:41:31.171397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.236 [2024-11-06 15:41:31.184111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.236 [2024-11-06 15:41:31.184859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.236 [2024-11-06 15:41:31.184921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.236 [2024-11-06 15:41:31.184933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.236 [2024-11-06 15:41:31.185186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.236 [2024-11-06 15:41:31.185413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.236 [2024-11-06 15:41:31.185422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.236 [2024-11-06 15:41:31.185430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.236 [2024-11-06 15:41:31.185439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.236 [2024-11-06 15:41:31.197947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.236 [2024-11-06 15:41:31.198656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.236 [2024-11-06 15:41:31.198718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.236 [2024-11-06 15:41:31.198731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.236 [2024-11-06 15:41:31.198998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.236 [2024-11-06 15:41:31.199224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.236 [2024-11-06 15:41:31.199234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.236 [2024-11-06 15:41:31.199248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.237 [2024-11-06 15:41:31.199257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.237 [2024-11-06 15:41:31.211756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.237 [2024-11-06 15:41:31.212481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.237 [2024-11-06 15:41:31.212541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.237 [2024-11-06 15:41:31.212554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.237 [2024-11-06 15:41:31.212822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.237 [2024-11-06 15:41:31.213049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.237 [2024-11-06 15:41:31.213057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.237 [2024-11-06 15:41:31.213067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.237 [2024-11-06 15:41:31.213076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.499 [2024-11-06 15:41:31.225617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.499 [2024-11-06 15:41:31.226307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.499 [2024-11-06 15:41:31.226369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.499 [2024-11-06 15:41:31.226382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.499 [2024-11-06 15:41:31.226636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.499 [2024-11-06 15:41:31.226876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.499 [2024-11-06 15:41:31.226886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.499 [2024-11-06 15:41:31.226894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.500 [2024-11-06 15:41:31.226903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.500 [2024-11-06 15:41:31.239612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.500 [2024-11-06 15:41:31.240324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.500 [2024-11-06 15:41:31.240385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.500 [2024-11-06 15:41:31.240398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.500 [2024-11-06 15:41:31.240653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.500 [2024-11-06 15:41:31.240891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.500 [2024-11-06 15:41:31.240901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.500 [2024-11-06 15:41:31.240910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.500 [2024-11-06 15:41:31.240919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.500 [2024-11-06 15:41:31.253429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.500 [2024-11-06 15:41:31.254147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.500 [2024-11-06 15:41:31.254209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.500 [2024-11-06 15:41:31.254221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.500 [2024-11-06 15:41:31.254474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.500 [2024-11-06 15:41:31.254700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.500 [2024-11-06 15:41:31.254709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.500 [2024-11-06 15:41:31.254717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.500 [2024-11-06 15:41:31.254726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.500 [2024-11-06 15:41:31.267250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.500 [2024-11-06 15:41:31.267902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.500 [2024-11-06 15:41:31.267933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.500 [2024-11-06 15:41:31.267942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.500 [2024-11-06 15:41:31.268164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.500 [2024-11-06 15:41:31.268385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.500 [2024-11-06 15:41:31.268395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.500 [2024-11-06 15:41:31.268402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.500 [2024-11-06 15:41:31.268410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.500 [2024-11-06 15:41:31.281100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.500 [2024-11-06 15:41:31.281672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.500 [2024-11-06 15:41:31.281697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.500 [2024-11-06 15:41:31.281705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.500 [2024-11-06 15:41:31.281933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.500 [2024-11-06 15:41:31.282154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.500 [2024-11-06 15:41:31.282162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.500 [2024-11-06 15:41:31.282170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.500 [2024-11-06 15:41:31.282177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.500 [2024-11-06 15:41:31.295063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.500 [2024-11-06 15:41:31.295660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.500 [2024-11-06 15:41:31.295684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.500 [2024-11-06 15:41:31.295700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.500 [2024-11-06 15:41:31.295931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.500 [2024-11-06 15:41:31.296152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.500 [2024-11-06 15:41:31.296160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.500 [2024-11-06 15:41:31.296168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.500 [2024-11-06 15:41:31.296175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.500 [2024-11-06 15:41:31.308847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.500 [2024-11-06 15:41:31.309509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.500 [2024-11-06 15:41:31.309570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.500 [2024-11-06 15:41:31.309583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.500 [2024-11-06 15:41:31.309851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.500 [2024-11-06 15:41:31.310078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.500 [2024-11-06 15:41:31.310086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.500 [2024-11-06 15:41:31.310094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.500 [2024-11-06 15:41:31.310103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.500 [2024-11-06 15:41:31.322835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.500 [2024-11-06 15:41:31.323571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.500 [2024-11-06 15:41:31.323631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.500 [2024-11-06 15:41:31.323644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.500 [2024-11-06 15:41:31.323912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.500 9434.67 IOPS, 36.85 MiB/s [2024-11-06T14:41:31.483Z] [2024-11-06 15:41:31.325798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.500 [2024-11-06 15:41:31.325808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.500 [2024-11-06 15:41:31.325816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.500 [2024-11-06 15:41:31.325825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
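
The "9434.67 IOPS, 36.85 MiB/s" sample interleaved above appears to be the I/O workload's periodic throughput report (note its UTC timestamp, one hour behind the local-time stamps of the surrounding messages). The two figures are mutually consistent with a 4 KiB I/O size: 9434.67 IOPS x 4096 B = 38,644,408 B/s, and 38,644,408 / 1,048,576 = 36.85 MiB/s.
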
00:29:13.500 [2024-11-06 15:41:31.336665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.500 [2024-11-06 15:41:31.337299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.500 [2024-11-06 15:41:31.337357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.500 [2024-11-06 15:41:31.337369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.500 [2024-11-06 15:41:31.337623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.500 [2024-11-06 15:41:31.337869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.500 [2024-11-06 15:41:31.337880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.500 [2024-11-06 15:41:31.337888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.500 [2024-11-06 15:41:31.337897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.500 [2024-11-06 15:41:31.350670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.500 [2024-11-06 15:41:31.351370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.500 [2024-11-06 15:41:31.351431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.500 [2024-11-06 15:41:31.351443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.500 [2024-11-06 15:41:31.351698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.500 [2024-11-06 15:41:31.351939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.500 [2024-11-06 15:41:31.351948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.501 [2024-11-06 15:41:31.351957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.501 [2024-11-06 15:41:31.351966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.501 [2024-11-06 15:41:31.364666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.501 [2024-11-06 15:41:31.365366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.501 [2024-11-06 15:41:31.365428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.501 [2024-11-06 15:41:31.365441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.501 [2024-11-06 15:41:31.365694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.501 [2024-11-06 15:41:31.365935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.501 [2024-11-06 15:41:31.365945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.501 [2024-11-06 15:41:31.365953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.501 [2024-11-06 15:41:31.365962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.501 [2024-11-06 15:41:31.378455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.501 [2024-11-06 15:41:31.379202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.501 [2024-11-06 15:41:31.379263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.501 [2024-11-06 15:41:31.379275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.501 [2024-11-06 15:41:31.379529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.501 [2024-11-06 15:41:31.379769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.501 [2024-11-06 15:41:31.379779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.501 [2024-11-06 15:41:31.379794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.501 [2024-11-06 15:41:31.379803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.501 [2024-11-06 15:41:31.392319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.501 [2024-11-06 15:41:31.392937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.501 [2024-11-06 15:41:31.392969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.501 [2024-11-06 15:41:31.392978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.501 [2024-11-06 15:41:31.393202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.501 [2024-11-06 15:41:31.393422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.501 [2024-11-06 15:41:31.393431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.501 [2024-11-06 15:41:31.393439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.501 [2024-11-06 15:41:31.393446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.501 [2024-11-06 15:41:31.406139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.501 [2024-11-06 15:41:31.406701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.501 [2024-11-06 15:41:31.406725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.501 [2024-11-06 15:41:31.406734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.501 [2024-11-06 15:41:31.406963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.501 [2024-11-06 15:41:31.407183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.501 [2024-11-06 15:41:31.407193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.501 [2024-11-06 15:41:31.407200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.501 [2024-11-06 15:41:31.407208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.501 [2024-11-06 15:41:31.420092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.501 [2024-11-06 15:41:31.420655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.501 [2024-11-06 15:41:31.420678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.501 [2024-11-06 15:41:31.420687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.501 [2024-11-06 15:41:31.420984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.501 [2024-11-06 15:41:31.421206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.501 [2024-11-06 15:41:31.421214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.501 [2024-11-06 15:41:31.421221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.501 [2024-11-06 15:41:31.421229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.501 [2024-11-06 15:41:31.433952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.501 [2024-11-06 15:41:31.434663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.501 [2024-11-06 15:41:31.434724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.501 [2024-11-06 15:41:31.434737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.501 [2024-11-06 15:41:31.435008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.501 [2024-11-06 15:41:31.435235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.501 [2024-11-06 15:41:31.435244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.501 [2024-11-06 15:41:31.435252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.501 [2024-11-06 15:41:31.435260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.501 [2024-11-06 15:41:31.447777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.501 [2024-11-06 15:41:31.448531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.501 [2024-11-06 15:41:31.448592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.501 [2024-11-06 15:41:31.448604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.501 [2024-11-06 15:41:31.448871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.501 [2024-11-06 15:41:31.449098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.501 [2024-11-06 15:41:31.449107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.501 [2024-11-06 15:41:31.449115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.501 [2024-11-06 15:41:31.449124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.501 [2024-11-06 15:41:31.461651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.501 [2024-11-06 15:41:31.462399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.501 [2024-11-06 15:41:31.462460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.501 [2024-11-06 15:41:31.462473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.501 [2024-11-06 15:41:31.462726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.501 [2024-11-06 15:41:31.462968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.501 [2024-11-06 15:41:31.462978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.501 [2024-11-06 15:41:31.462986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.501 [2024-11-06 15:41:31.462995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.501 [2024-11-06 15:41:31.475485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.501 [2024-11-06 15:41:31.476128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.501 [2024-11-06 15:41:31.476159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.501 [2024-11-06 15:41:31.476175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.501 [2024-11-06 15:41:31.476397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.501 [2024-11-06 15:41:31.476617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.501 [2024-11-06 15:41:31.476625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.501 [2024-11-06 15:41:31.476633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.502 [2024-11-06 15:41:31.476641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.764 [2024-11-06 15:41:31.489356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.764 [2024-11-06 15:41:31.490063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.764 [2024-11-06 15:41:31.490124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.764 [2024-11-06 15:41:31.490138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.764 [2024-11-06 15:41:31.490392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.764 [2024-11-06 15:41:31.490617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.764 [2024-11-06 15:41:31.490626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.764 [2024-11-06 15:41:31.490634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.764 [2024-11-06 15:41:31.490644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.764 [2024-11-06 15:41:31.503146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.764 [2024-11-06 15:41:31.503824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.764 [2024-11-06 15:41:31.503886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.764 [2024-11-06 15:41:31.503899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.764 [2024-11-06 15:41:31.504153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.764 [2024-11-06 15:41:31.504379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.764 [2024-11-06 15:41:31.504388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.765 [2024-11-06 15:41:31.504396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.765 [2024-11-06 15:41:31.504405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.765 [2024-11-06 15:41:31.517115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.765 [2024-11-06 15:41:31.517762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.765 [2024-11-06 15:41:31.517792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.765 [2024-11-06 15:41:31.517801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.765 [2024-11-06 15:41:31.518023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.765 [2024-11-06 15:41:31.518259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.765 [2024-11-06 15:41:31.518268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.765 [2024-11-06 15:41:31.518276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.765 [2024-11-06 15:41:31.518283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.765 [2024-11-06 15:41:31.531011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.765 [2024-11-06 15:41:31.531604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.765 [2024-11-06 15:41:31.531631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.765 [2024-11-06 15:41:31.531640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.765 [2024-11-06 15:41:31.531871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.765 [2024-11-06 15:41:31.532091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.765 [2024-11-06 15:41:31.532101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.765 [2024-11-06 15:41:31.532109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.765 [2024-11-06 15:41:31.532116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.765 [2024-11-06 15:41:31.544854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.765 [2024-11-06 15:41:31.545472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.765 [2024-11-06 15:41:31.545496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.765 [2024-11-06 15:41:31.545504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.765 [2024-11-06 15:41:31.545723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.765 [2024-11-06 15:41:31.545954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.765 [2024-11-06 15:41:31.545964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.765 [2024-11-06 15:41:31.545972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.765 [2024-11-06 15:41:31.545979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.765 [2024-11-06 15:41:31.558769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.765 [2024-11-06 15:41:31.559271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.765 [2024-11-06 15:41:31.559301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.765 [2024-11-06 15:41:31.559310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.765 [2024-11-06 15:41:31.559533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.765 [2024-11-06 15:41:31.559765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.765 [2024-11-06 15:41:31.559775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.765 [2024-11-06 15:41:31.559783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.765 [2024-11-06 15:41:31.559797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.765 [2024-11-06 15:41:31.572736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.765 [2024-11-06 15:41:31.573308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.765 [2024-11-06 15:41:31.573332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.765 [2024-11-06 15:41:31.573341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.765 [2024-11-06 15:41:31.573561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.765 [2024-11-06 15:41:31.573790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.765 [2024-11-06 15:41:31.573800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.765 [2024-11-06 15:41:31.573807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.765 [2024-11-06 15:41:31.573815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.765 [2024-11-06 15:41:31.586539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.765 [2024-11-06 15:41:31.587183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.765 [2024-11-06 15:41:31.587209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.765 [2024-11-06 15:41:31.587217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.765 [2024-11-06 15:41:31.587437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.765 [2024-11-06 15:41:31.587657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.765 [2024-11-06 15:41:31.587665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.765 [2024-11-06 15:41:31.587673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.765 [2024-11-06 15:41:31.587681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.765 [2024-11-06 15:41:31.600409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.765 [2024-11-06 15:41:31.600889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.765 [2024-11-06 15:41:31.600917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.765 [2024-11-06 15:41:31.600926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.765 [2024-11-06 15:41:31.601148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.765 [2024-11-06 15:41:31.601368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.765 [2024-11-06 15:41:31.601377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.765 [2024-11-06 15:41:31.601385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.765 [2024-11-06 15:41:31.601392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.765 [2024-11-06 15:41:31.614325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.765 [2024-11-06 15:41:31.614898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.765 [2024-11-06 15:41:31.614921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.765 [2024-11-06 15:41:31.614930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.765 [2024-11-06 15:41:31.615151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.765 [2024-11-06 15:41:31.615371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.765 [2024-11-06 15:41:31.615380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.765 [2024-11-06 15:41:31.615388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.765 [2024-11-06 15:41:31.615396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.765 [2024-11-06 15:41:31.628142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.765 [2024-11-06 15:41:31.628801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.765 [2024-11-06 15:41:31.628862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.765 [2024-11-06 15:41:31.628875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.765 [2024-11-06 15:41:31.629128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.765 [2024-11-06 15:41:31.629354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.765 [2024-11-06 15:41:31.629363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.766 [2024-11-06 15:41:31.629371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.766 [2024-11-06 15:41:31.629381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.766 [2024-11-06 15:41:31.642117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.766 [2024-11-06 15:41:31.642812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.766 [2024-11-06 15:41:31.642874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.766 [2024-11-06 15:41:31.642887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.766 [2024-11-06 15:41:31.643141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.766 [2024-11-06 15:41:31.643367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.766 [2024-11-06 15:41:31.643376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.766 [2024-11-06 15:41:31.643384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.766 [2024-11-06 15:41:31.643393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.766 [2024-11-06 15:41:31.656110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.766 [2024-11-06 15:41:31.656766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.766 [2024-11-06 15:41:31.656797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.766 [2024-11-06 15:41:31.656807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.766 [2024-11-06 15:41:31.657038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.766 [2024-11-06 15:41:31.657261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.766 [2024-11-06 15:41:31.657271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.766 [2024-11-06 15:41:31.657279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.766 [2024-11-06 15:41:31.657286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.766 [2024-11-06 15:41:31.669985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.766 [2024-11-06 15:41:31.670667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.766 [2024-11-06 15:41:31.670727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.766 [2024-11-06 15:41:31.670740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.766 [2024-11-06 15:41:31.671006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.766 [2024-11-06 15:41:31.671232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.766 [2024-11-06 15:41:31.671241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.766 [2024-11-06 15:41:31.671250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.766 [2024-11-06 15:41:31.671259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.766 [2024-11-06 15:41:31.683803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.766 [2024-11-06 15:41:31.684440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.766 [2024-11-06 15:41:31.684468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.766 [2024-11-06 15:41:31.684477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.766 [2024-11-06 15:41:31.684698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.766 [2024-11-06 15:41:31.684929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.766 [2024-11-06 15:41:31.684940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.766 [2024-11-06 15:41:31.684948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.766 [2024-11-06 15:41:31.684957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.766 [2024-11-06 15:41:31.697668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.766 [2024-11-06 15:41:31.698380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.766 [2024-11-06 15:41:31.698442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.766 [2024-11-06 15:41:31.698455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.766 [2024-11-06 15:41:31.698709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.766 [2024-11-06 15:41:31.698952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.766 [2024-11-06 15:41:31.698969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.766 [2024-11-06 15:41:31.698978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.766 [2024-11-06 15:41:31.698986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.766 [2024-11-06 15:41:31.711530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.766 [2024-11-06 15:41:31.712184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.766 [2024-11-06 15:41:31.712214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.766 [2024-11-06 15:41:31.712223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.766 [2024-11-06 15:41:31.712444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.766 [2024-11-06 15:41:31.712664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.766 [2024-11-06 15:41:31.712673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.766 [2024-11-06 15:41:31.712681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.766 [2024-11-06 15:41:31.712688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.766 [2024-11-06 15:41:31.725473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.766 [2024-11-06 15:41:31.726095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.766 [2024-11-06 15:41:31.726122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.766 [2024-11-06 15:41:31.726130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.766 [2024-11-06 15:41:31.726351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.766 [2024-11-06 15:41:31.726571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.766 [2024-11-06 15:41:31.726580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.766 [2024-11-06 15:41:31.726588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.766 [2024-11-06 15:41:31.726595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.766 [2024-11-06 15:41:31.739330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.766 [2024-11-06 15:41:31.740049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.766 [2024-11-06 15:41:31.740109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:13.766 [2024-11-06 15:41:31.740121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:13.766 [2024-11-06 15:41:31.740375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:13.766 [2024-11-06 15:41:31.740601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.766 [2024-11-06 15:41:31.740611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.766 [2024-11-06 15:41:31.740620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.766 [2024-11-06 15:41:31.740636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.029 [2024-11-06 15:41:31.753195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.030 [2024-11-06 15:41:31.753852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.030 [2024-11-06 15:41:31.753898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.030 [2024-11-06 15:41:31.753908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.030 [2024-11-06 15:41:31.754147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.030 [2024-11-06 15:41:31.754369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.030 [2024-11-06 15:41:31.754377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.030 [2024-11-06 15:41:31.754385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.030 [2024-11-06 15:41:31.754392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.030 [2024-11-06 15:41:31.765943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.030 [2024-11-06 15:41:31.766487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.030 [2024-11-06 15:41:31.766509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.030 [2024-11-06 15:41:31.766515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.030 [2024-11-06 15:41:31.766669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.030 [2024-11-06 15:41:31.766831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.030 [2024-11-06 15:41:31.766837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.030 [2024-11-06 15:41:31.766844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.030 [2024-11-06 15:41:31.766850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.030 [2024-11-06 15:41:31.778583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.030 [2024-11-06 15:41:31.779116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.030 [2024-11-06 15:41:31.779135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.030 [2024-11-06 15:41:31.779141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.030 [2024-11-06 15:41:31.779292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.030 [2024-11-06 15:41:31.779444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.030 [2024-11-06 15:41:31.779451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.030 [2024-11-06 15:41:31.779456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.030 [2024-11-06 15:41:31.779461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.030 [2024-11-06 15:41:31.791328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.030 [2024-11-06 15:41:31.791806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.030 [2024-11-06 15:41:31.791823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.030 [2024-11-06 15:41:31.791829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.030 [2024-11-06 15:41:31.791981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.030 [2024-11-06 15:41:31.792132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.030 [2024-11-06 15:41:31.792138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.030 [2024-11-06 15:41:31.792143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.030 [2024-11-06 15:41:31.792148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.030 [2024-11-06 15:41:31.804017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.030 [2024-11-06 15:41:31.804520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.030 [2024-11-06 15:41:31.804536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.030 [2024-11-06 15:41:31.804541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.030 [2024-11-06 15:41:31.804692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.030 [2024-11-06 15:41:31.804852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.030 [2024-11-06 15:41:31.804859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.030 [2024-11-06 15:41:31.804864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.030 [2024-11-06 15:41:31.804869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.030 [2024-11-06 15:41:31.816718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.030 [2024-11-06 15:41:31.817224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.030 [2024-11-06 15:41:31.817240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.030 [2024-11-06 15:41:31.817246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.030 [2024-11-06 15:41:31.817397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.030 [2024-11-06 15:41:31.817547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.030 [2024-11-06 15:41:31.817553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.030 [2024-11-06 15:41:31.817559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.030 [2024-11-06 15:41:31.817565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.030 [2024-11-06 15:41:31.829447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.030 [2024-11-06 15:41:31.830052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.030 [2024-11-06 15:41:31.830089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.030 [2024-11-06 15:41:31.830097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.030 [2024-11-06 15:41:31.830272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.030 [2024-11-06 15:41:31.830427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.030 [2024-11-06 15:41:31.830433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.030 [2024-11-06 15:41:31.830439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.030 [2024-11-06 15:41:31.830445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.030 [2024-11-06 15:41:31.842145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.030 [2024-11-06 15:41:31.842726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.030 [2024-11-06 15:41:31.842767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.030 [2024-11-06 15:41:31.842775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.030 [2024-11-06 15:41:31.842944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.030 [2024-11-06 15:41:31.843098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.030 [2024-11-06 15:41:31.843104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.030 [2024-11-06 15:41:31.843109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.030 [2024-11-06 15:41:31.843115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.030 [2024-11-06 15:41:31.854805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.030 [2024-11-06 15:41:31.855307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.030 [2024-11-06 15:41:31.855324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.030 [2024-11-06 15:41:31.855329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.030 [2024-11-06 15:41:31.855480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.030 [2024-11-06 15:41:31.855630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.031 [2024-11-06 15:41:31.855635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.031 [2024-11-06 15:41:31.855640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.031 [2024-11-06 15:41:31.855645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.031 [2024-11-06 15:41:31.867461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.031 [2024-11-06 15:41:31.868058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.031 [2024-11-06 15:41:31.868090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.031 [2024-11-06 15:41:31.868098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.031 [2024-11-06 15:41:31.868265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.031 [2024-11-06 15:41:31.868418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.031 [2024-11-06 15:41:31.868428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.031 [2024-11-06 15:41:31.868434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.031 [2024-11-06 15:41:31.868440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.031 [2024-11-06 15:41:31.880129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.031 [2024-11-06 15:41:31.880486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.031 [2024-11-06 15:41:31.880502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.031 [2024-11-06 15:41:31.880508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.031 [2024-11-06 15:41:31.880660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.031 [2024-11-06 15:41:31.880817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.031 [2024-11-06 15:41:31.880823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.031 [2024-11-06 15:41:31.880828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.031 [2024-11-06 15:41:31.880833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.031 [2024-11-06 15:41:31.892787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.031 [2024-11-06 15:41:31.893336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.031 [2024-11-06 15:41:31.893366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.031 [2024-11-06 15:41:31.893375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.031 [2024-11-06 15:41:31.893541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.031 [2024-11-06 15:41:31.893694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.031 [2024-11-06 15:41:31.893700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.031 [2024-11-06 15:41:31.893705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.031 [2024-11-06 15:41:31.893710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.031 [2024-11-06 15:41:31.905603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.031 [2024-11-06 15:41:31.906191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.031 [2024-11-06 15:41:31.906222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.031 [2024-11-06 15:41:31.906230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.031 [2024-11-06 15:41:31.906396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.031 [2024-11-06 15:41:31.906549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.031 [2024-11-06 15:41:31.906555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.031 [2024-11-06 15:41:31.906560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.031 [2024-11-06 15:41:31.906569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.031 [2024-11-06 15:41:31.918250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.031 [2024-11-06 15:41:31.918695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.031 [2024-11-06 15:41:31.918709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.031 [2024-11-06 15:41:31.918715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.031 [2024-11-06 15:41:31.918869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.031 [2024-11-06 15:41:31.919020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.031 [2024-11-06 15:41:31.919025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.031 [2024-11-06 15:41:31.919030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.031 [2024-11-06 15:41:31.919035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.031 [2024-11-06 15:41:31.930857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.031 [2024-11-06 15:41:31.931222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.031 [2024-11-06 15:41:31.931235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.031 [2024-11-06 15:41:31.931240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.031 [2024-11-06 15:41:31.931390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.031 [2024-11-06 15:41:31.931539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.031 [2024-11-06 15:41:31.931545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.031 [2024-11-06 15:41:31.931550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.031 [2024-11-06 15:41:31.931555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.031 [2024-11-06 15:41:31.943504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.031 [2024-11-06 15:41:31.943962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.031 [2024-11-06 15:41:31.943975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.031 [2024-11-06 15:41:31.943980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.031 [2024-11-06 15:41:31.944130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.031 [2024-11-06 15:41:31.944280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.031 [2024-11-06 15:41:31.944286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.031 [2024-11-06 15:41:31.944291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.031 [2024-11-06 15:41:31.944295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.031 [2024-11-06 15:41:31.956101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.031 [2024-11-06 15:41:31.956581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.031 [2024-11-06 15:41:31.956596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.031 [2024-11-06 15:41:31.956602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.031 [2024-11-06 15:41:31.956755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.031 [2024-11-06 15:41:31.956905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.031 [2024-11-06 15:41:31.956911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.031 [2024-11-06 15:41:31.956916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.031 [2024-11-06 15:41:31.956920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.031 [2024-11-06 15:41:31.968718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.032 [2024-11-06 15:41:31.969173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.032 [2024-11-06 15:41:31.969186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.032 [2024-11-06 15:41:31.969191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.032 [2024-11-06 15:41:31.969340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.032 [2024-11-06 15:41:31.969489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.032 [2024-11-06 15:41:31.969495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.032 [2024-11-06 15:41:31.969500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.032 [2024-11-06 15:41:31.969504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.032 [2024-11-06 15:41:31.981339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.032 [2024-11-06 15:41:31.981853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.032 [2024-11-06 15:41:31.981883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.032 [2024-11-06 15:41:31.981892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.032 [2024-11-06 15:41:31.982060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.032 [2024-11-06 15:41:31.982212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.032 [2024-11-06 15:41:31.982218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.032 [2024-11-06 15:41:31.982224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.032 [2024-11-06 15:41:31.982229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.032 [2024-11-06 15:41:31.994046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.032 [2024-11-06 15:41:31.994538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.032 [2024-11-06 15:41:31.994553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.032 [2024-11-06 15:41:31.994558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.032 [2024-11-06 15:41:31.994712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.032 [2024-11-06 15:41:31.994866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.032 [2024-11-06 15:41:31.994872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.032 [2024-11-06 15:41:31.994877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.032 [2024-11-06 15:41:31.994882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.032 [2024-11-06 15:41:32.006695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.032 [2024-11-06 15:41:32.007265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.032 [2024-11-06 15:41:32.007295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.032 [2024-11-06 15:41:32.007304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.032 [2024-11-06 15:41:32.007469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.032 [2024-11-06 15:41:32.007622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.032 [2024-11-06 15:41:32.007628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.032 [2024-11-06 15:41:32.007633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.032 [2024-11-06 15:41:32.007639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.296 [2024-11-06 15:41:32.019317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.296 [2024-11-06 15:41:32.019818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.296 [2024-11-06 15:41:32.019833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.296 [2024-11-06 15:41:32.019838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.296 [2024-11-06 15:41:32.019989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.296 [2024-11-06 15:41:32.020139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.296 [2024-11-06 15:41:32.020144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.296 [2024-11-06 15:41:32.020149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.296 [2024-11-06 15:41:32.020154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.296 [2024-11-06 15:41:32.031979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.296 [2024-11-06 15:41:32.032461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.296 [2024-11-06 15:41:32.032475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.296 [2024-11-06 15:41:32.032480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.296 [2024-11-06 15:41:32.032630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.296 [2024-11-06 15:41:32.032783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.296 [2024-11-06 15:41:32.032794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.296 [2024-11-06 15:41:32.032799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.296 [2024-11-06 15:41:32.032803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.296 [2024-11-06 15:41:32.044607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.296 [2024-11-06 15:41:32.045173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.296 [2024-11-06 15:41:32.045203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.296 [2024-11-06 15:41:32.045211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.296 [2024-11-06 15:41:32.045377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.296 [2024-11-06 15:41:32.045530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.296 [2024-11-06 15:41:32.045536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.296 [2024-11-06 15:41:32.045541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.296 [2024-11-06 15:41:32.045547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.296 [2024-11-06 15:41:32.057223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.296 [2024-11-06 15:41:32.057776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.296 [2024-11-06 15:41:32.057807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.296 [2024-11-06 15:41:32.057816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.296 [2024-11-06 15:41:32.057981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.296 [2024-11-06 15:41:32.058134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.296 [2024-11-06 15:41:32.058140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.296 [2024-11-06 15:41:32.058145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.296 [2024-11-06 15:41:32.058150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.296 [2024-11-06 15:41:32.069829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.296 [2024-11-06 15:41:32.070310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.296 [2024-11-06 15:41:32.070339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.296 [2024-11-06 15:41:32.070347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.296 [2024-11-06 15:41:32.070513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.296 [2024-11-06 15:41:32.070666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.296 [2024-11-06 15:41:32.070672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.296 [2024-11-06 15:41:32.070677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.296 [2024-11-06 15:41:32.070683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.296 [2024-11-06 15:41:32.082517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.296 [2024-11-06 15:41:32.083097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.296 [2024-11-06 15:41:32.083128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.296 [2024-11-06 15:41:32.083137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.296 [2024-11-06 15:41:32.083303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.296 [2024-11-06 15:41:32.083456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.296 [2024-11-06 15:41:32.083462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.296 [2024-11-06 15:41:32.083467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.296 [2024-11-06 15:41:32.083473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.296 [2024-11-06 15:41:32.095166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.296 [2024-11-06 15:41:32.095786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.296 [2024-11-06 15:41:32.095816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.296 [2024-11-06 15:41:32.095824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.296 [2024-11-06 15:41:32.095992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.296 [2024-11-06 15:41:32.096145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.296 [2024-11-06 15:41:32.096151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.297 [2024-11-06 15:41:32.096156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.297 [2024-11-06 15:41:32.096162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.297 [2024-11-06 15:41:32.107850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.297 [2024-11-06 15:41:32.108397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.297 [2024-11-06 15:41:32.108427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.297 [2024-11-06 15:41:32.108436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.297 [2024-11-06 15:41:32.108601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.297 [2024-11-06 15:41:32.108761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.297 [2024-11-06 15:41:32.108768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.297 [2024-11-06 15:41:32.108774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.297 [2024-11-06 15:41:32.108779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.297 [2024-11-06 15:41:32.120460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.297 [2024-11-06 15:41:32.121045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.297 [2024-11-06 15:41:32.121079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.297 [2024-11-06 15:41:32.121088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.297 [2024-11-06 15:41:32.121255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.297 [2024-11-06 15:41:32.121408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.297 [2024-11-06 15:41:32.121414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.297 [2024-11-06 15:41:32.121419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.297 [2024-11-06 15:41:32.121425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.297 [2024-11-06 15:41:32.133121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.297 [2024-11-06 15:41:32.134057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.297 [2024-11-06 15:41:32.134075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.297 [2024-11-06 15:41:32.134082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.297 [2024-11-06 15:41:32.134239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.297 [2024-11-06 15:41:32.134390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.297 [2024-11-06 15:41:32.134396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.297 [2024-11-06 15:41:32.134401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.297 [2024-11-06 15:41:32.134407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.297 [2024-11-06 15:41:32.145814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.297 [2024-11-06 15:41:32.146390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.297 [2024-11-06 15:41:32.146420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.297 [2024-11-06 15:41:32.146429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.297 [2024-11-06 15:41:32.146594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.297 [2024-11-06 15:41:32.146754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.297 [2024-11-06 15:41:32.146762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.297 [2024-11-06 15:41:32.146767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.297 [2024-11-06 15:41:32.146773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.297 [2024-11-06 15:41:32.158449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.297 [2024-11-06 15:41:32.159089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.297 [2024-11-06 15:41:32.159120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.297 [2024-11-06 15:41:32.159128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.297 [2024-11-06 15:41:32.159297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.297 [2024-11-06 15:41:32.159449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.297 [2024-11-06 15:41:32.159455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.297 [2024-11-06 15:41:32.159460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.297 [2024-11-06 15:41:32.159466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.297 [2024-11-06 15:41:32.171142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.297 [2024-11-06 15:41:32.171717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.297 [2024-11-06 15:41:32.171752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.297 [2024-11-06 15:41:32.171762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.297 [2024-11-06 15:41:32.171929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.297 [2024-11-06 15:41:32.172082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.297 [2024-11-06 15:41:32.172089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.297 [2024-11-06 15:41:32.172094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.297 [2024-11-06 15:41:32.172100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.297 [2024-11-06 15:41:32.183812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.297 [2024-11-06 15:41:32.184360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.297 [2024-11-06 15:41:32.184390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.297 [2024-11-06 15:41:32.184399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.297 [2024-11-06 15:41:32.184564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.297 [2024-11-06 15:41:32.184717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.297 [2024-11-06 15:41:32.184724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.297 [2024-11-06 15:41:32.184729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.297 [2024-11-06 15:41:32.184735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.297 [2024-11-06 15:41:32.196413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.297 [2024-11-06 15:41:32.197041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.297 [2024-11-06 15:41:32.197072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.297 [2024-11-06 15:41:32.197080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.297 [2024-11-06 15:41:32.197246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.297 [2024-11-06 15:41:32.197399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.297 [2024-11-06 15:41:32.197405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.297 [2024-11-06 15:41:32.197414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.297 [2024-11-06 15:41:32.197420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.297 [2024-11-06 15:41:32.209113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.297 [2024-11-06 15:41:32.209602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.298 [2024-11-06 15:41:32.209617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.298 [2024-11-06 15:41:32.209622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.298 [2024-11-06 15:41:32.209776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.298 [2024-11-06 15:41:32.209927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.298 [2024-11-06 15:41:32.209932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.298 [2024-11-06 15:41:32.209937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.298 [2024-11-06 15:41:32.209942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.298 [2024-11-06 15:41:32.221749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.298 [2024-11-06 15:41:32.222117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.298 [2024-11-06 15:41:32.222130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.298 [2024-11-06 15:41:32.222135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.298 [2024-11-06 15:41:32.222285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.298 [2024-11-06 15:41:32.222434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.298 [2024-11-06 15:41:32.222440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.298 [2024-11-06 15:41:32.222444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.298 [2024-11-06 15:41:32.222449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.298 [2024-11-06 15:41:32.234433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.298 [2024-11-06 15:41:32.234985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.298 [2024-11-06 15:41:32.235015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.298 [2024-11-06 15:41:32.235023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.298 [2024-11-06 15:41:32.235189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.298 [2024-11-06 15:41:32.235342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.298 [2024-11-06 15:41:32.235347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.298 [2024-11-06 15:41:32.235353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.298 [2024-11-06 15:41:32.235359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.298 [2024-11-06 15:41:32.247052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.298 [2024-11-06 15:41:32.247621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.298 [2024-11-06 15:41:32.247651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.298 [2024-11-06 15:41:32.247659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.298 [2024-11-06 15:41:32.247831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.298 [2024-11-06 15:41:32.247984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.298 [2024-11-06 15:41:32.247990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.298 [2024-11-06 15:41:32.247996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.298 [2024-11-06 15:41:32.248001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.298 [2024-11-06 15:41:32.259671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.298 [2024-11-06 15:41:32.260168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.298 [2024-11-06 15:41:32.260184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.298 [2024-11-06 15:41:32.260189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.298 [2024-11-06 15:41:32.260339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.298 [2024-11-06 15:41:32.260489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.298 [2024-11-06 15:41:32.260495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.298 [2024-11-06 15:41:32.260500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.298 [2024-11-06 15:41:32.260505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.298 [2024-11-06 15:41:32.272313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.298 [2024-11-06 15:41:32.272667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.298 [2024-11-06 15:41:32.272681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.298 [2024-11-06 15:41:32.272687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.298 [2024-11-06 15:41:32.272840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.298 [2024-11-06 15:41:32.272991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.298 [2024-11-06 15:41:32.272997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.298 [2024-11-06 15:41:32.273002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.298 [2024-11-06 15:41:32.273006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.561 [2024-11-06 15:41:32.284956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.561 [2024-11-06 15:41:32.285433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.561 [2024-11-06 15:41:32.285445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.561 [2024-11-06 15:41:32.285454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.561 [2024-11-06 15:41:32.285604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.561 [2024-11-06 15:41:32.285757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.561 [2024-11-06 15:41:32.285763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.561 [2024-11-06 15:41:32.285767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.561 [2024-11-06 15:41:32.285772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.561 [2024-11-06 15:41:32.297576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.561 [2024-11-06 15:41:32.298073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.561 [2024-11-06 15:41:32.298086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.561 [2024-11-06 15:41:32.298091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.561 [2024-11-06 15:41:32.298240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.561 [2024-11-06 15:41:32.298390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.561 [2024-11-06 15:41:32.298395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.561 [2024-11-06 15:41:32.298400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.561 [2024-11-06 15:41:32.298404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.561 [2024-11-06 15:41:32.310211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.561 [2024-11-06 15:41:32.310701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.561 [2024-11-06 15:41:32.310713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.561 [2024-11-06 15:41:32.310718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.561 [2024-11-06 15:41:32.310870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.561 [2024-11-06 15:41:32.311020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.561 [2024-11-06 15:41:32.311026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.561 [2024-11-06 15:41:32.311030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.561 [2024-11-06 15:41:32.311035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.561 [2024-11-06 15:41:32.322854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.561 [2024-11-06 15:41:32.323424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.561 [2024-11-06 15:41:32.323454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.561 [2024-11-06 15:41:32.323462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.562 [2024-11-06 15:41:32.323628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.562 [2024-11-06 15:41:32.323790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.562 [2024-11-06 15:41:32.323797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.562 [2024-11-06 15:41:32.323802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.562 [2024-11-06 15:41:32.323808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.562 7076.00 IOPS, 27.64 MiB/s [2024-11-06T14:41:32.545Z] [2024-11-06 15:41:32.335490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.562 [2024-11-06 15:41:32.335953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.562 [2024-11-06 15:41:32.335969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.562 [2024-11-06 15:41:32.335975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.562 [2024-11-06 15:41:32.336125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.562 [2024-11-06 15:41:32.336275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.562 [2024-11-06 15:41:32.336280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.562 [2024-11-06 15:41:32.336285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.562 [2024-11-06 15:41:32.336290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.562 [2024-11-06 15:41:32.348096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.562 [2024-11-06 15:41:32.348586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.562 [2024-11-06 15:41:32.348599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.562 [2024-11-06 15:41:32.348605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.562 [2024-11-06 15:41:32.348758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.562 [2024-11-06 15:41:32.348910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.562 [2024-11-06 15:41:32.348916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.562 [2024-11-06 15:41:32.348922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.562 [2024-11-06 15:41:32.348927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.562 [2024-11-06 15:41:32.360725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.562 [2024-11-06 15:41:32.361189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.562 [2024-11-06 15:41:32.361201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.562 [2024-11-06 15:41:32.361206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.562 [2024-11-06 15:41:32.361356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.562 [2024-11-06 15:41:32.361505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.562 [2024-11-06 15:41:32.361510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.562 [2024-11-06 15:41:32.361518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.562 [2024-11-06 15:41:32.361523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.562 [2024-11-06 15:41:32.373321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.562 [2024-11-06 15:41:32.373909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.562 [2024-11-06 15:41:32.373939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.562 [2024-11-06 15:41:32.373948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.562 [2024-11-06 15:41:32.374113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.562 [2024-11-06 15:41:32.374266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.562 [2024-11-06 15:41:32.374272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.562 [2024-11-06 15:41:32.374277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.562 [2024-11-06 15:41:32.374283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.562 [2024-11-06 15:41:32.386008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.562 [2024-11-06 15:41:32.386499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.562 [2024-11-06 15:41:32.386514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.562 [2024-11-06 15:41:32.386520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.562 [2024-11-06 15:41:32.386670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.562 [2024-11-06 15:41:32.386827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.562 [2024-11-06 15:41:32.386834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.562 [2024-11-06 15:41:32.386840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.562 [2024-11-06 15:41:32.386845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.562 [2024-11-06 15:41:32.398689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.562 [2024-11-06 15:41:32.399239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.562 [2024-11-06 15:41:32.399269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.562 [2024-11-06 15:41:32.399277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.562 [2024-11-06 15:41:32.399442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.562 [2024-11-06 15:41:32.399595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.562 [2024-11-06 15:41:32.399601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.562 [2024-11-06 15:41:32.399607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.562 [2024-11-06 15:41:32.399612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.562 [2024-11-06 15:41:32.411293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.562 [2024-11-06 15:41:32.411862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.562 [2024-11-06 15:41:32.411892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.562 [2024-11-06 15:41:32.411900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.562 [2024-11-06 15:41:32.412066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.562 [2024-11-06 15:41:32.412218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.562 [2024-11-06 15:41:32.412224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.562 [2024-11-06 15:41:32.412230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.562 [2024-11-06 15:41:32.412235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.562 [2024-11-06 15:41:32.423904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.562 [2024-11-06 15:41:32.424493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.562 [2024-11-06 15:41:32.424523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.562 [2024-11-06 15:41:32.424531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.562 [2024-11-06 15:41:32.424696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.562 [2024-11-06 15:41:32.424861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.562 [2024-11-06 15:41:32.424868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.563 [2024-11-06 15:41:32.424873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.563 [2024-11-06 15:41:32.424879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.563 [2024-11-06 15:41:32.436552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.563 [2024-11-06 15:41:32.437107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.563 [2024-11-06 15:41:32.437137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.563 [2024-11-06 15:41:32.437146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.563 [2024-11-06 15:41:32.437312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.563 [2024-11-06 15:41:32.437464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.563 [2024-11-06 15:41:32.437470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.563 [2024-11-06 15:41:32.437476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.563 [2024-11-06 15:41:32.437481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.563 [2024-11-06 15:41:32.449177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.563 [2024-11-06 15:41:32.449754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.563 [2024-11-06 15:41:32.449784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.563 [2024-11-06 15:41:32.449796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.563 [2024-11-06 15:41:32.449961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.563 [2024-11-06 15:41:32.450114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.563 [2024-11-06 15:41:32.450120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.563 [2024-11-06 15:41:32.450125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.563 [2024-11-06 15:41:32.450131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.563 [2024-11-06 15:41:32.461832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.563 [2024-11-06 15:41:32.462417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.563 [2024-11-06 15:41:32.462447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.563 [2024-11-06 15:41:32.462456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.563 [2024-11-06 15:41:32.462621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.563 [2024-11-06 15:41:32.462779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.563 [2024-11-06 15:41:32.462786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.563 [2024-11-06 15:41:32.462791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.563 [2024-11-06 15:41:32.462797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.563 [2024-11-06 15:41:32.474478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.563 [2024-11-06 15:41:32.474971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.563 [2024-11-06 15:41:32.474987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.563 [2024-11-06 15:41:32.474993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.563 [2024-11-06 15:41:32.475143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.563 [2024-11-06 15:41:32.475293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.563 [2024-11-06 15:41:32.475299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.563 [2024-11-06 15:41:32.475304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.563 [2024-11-06 15:41:32.475309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.563 [2024-11-06 15:41:32.487134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.563 [2024-11-06 15:41:32.487623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.563 [2024-11-06 15:41:32.487636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.563 [2024-11-06 15:41:32.487641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.563 [2024-11-06 15:41:32.487795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.563 [2024-11-06 15:41:32.487949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.563 [2024-11-06 15:41:32.487954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.563 [2024-11-06 15:41:32.487959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.563 [2024-11-06 15:41:32.487964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.563 [2024-11-06 15:41:32.499784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.563 [2024-11-06 15:41:32.500253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.563 [2024-11-06 15:41:32.500266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.563 [2024-11-06 15:41:32.500271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.563 [2024-11-06 15:41:32.500420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.563 [2024-11-06 15:41:32.500570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.563 [2024-11-06 15:41:32.500575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.563 [2024-11-06 15:41:32.500580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.563 [2024-11-06 15:41:32.500585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.563 [2024-11-06 15:41:32.512401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.563 [2024-11-06 15:41:32.512785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.563 [2024-11-06 15:41:32.512798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.563 [2024-11-06 15:41:32.512803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.563 [2024-11-06 15:41:32.512953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.563 [2024-11-06 15:41:32.513103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.563 [2024-11-06 15:41:32.513108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.563 [2024-11-06 15:41:32.513113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.563 [2024-11-06 15:41:32.513118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.563 [2024-11-06 15:41:32.525084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.563 [2024-11-06 15:41:32.525574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.563 [2024-11-06 15:41:32.525586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.563 [2024-11-06 15:41:32.525591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.563 [2024-11-06 15:41:32.525741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.563 [2024-11-06 15:41:32.525896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.563 [2024-11-06 15:41:32.525903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.563 [2024-11-06 15:41:32.525910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.563 [2024-11-06 15:41:32.525915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.563 [2024-11-06 15:41:32.537740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.563 [2024-11-06 15:41:32.538199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.563 [2024-11-06 15:41:32.538212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.563 [2024-11-06 15:41:32.538217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.563 [2024-11-06 15:41:32.538366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.563 [2024-11-06 15:41:32.538516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.564 [2024-11-06 15:41:32.538521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.564 [2024-11-06 15:41:32.538526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.564 [2024-11-06 15:41:32.538531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.827 [2024-11-06 15:41:32.550354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.827 [2024-11-06 15:41:32.550840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.827 [2024-11-06 15:41:32.550871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.827 [2024-11-06 15:41:32.550880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.827 [2024-11-06 15:41:32.551048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.827 [2024-11-06 15:41:32.551201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.827 [2024-11-06 15:41:32.551207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.827 [2024-11-06 15:41:32.551213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.827 [2024-11-06 15:41:32.551219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.827 [2024-11-06 15:41:32.563048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.827 [2024-11-06 15:41:32.563629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.827 [2024-11-06 15:41:32.563659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.827 [2024-11-06 15:41:32.563667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.827 [2024-11-06 15:41:32.563840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.827 [2024-11-06 15:41:32.563994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.827 [2024-11-06 15:41:32.564000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.827 [2024-11-06 15:41:32.564005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.827 [2024-11-06 15:41:32.564010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.827 [2024-11-06 15:41:32.575690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.827 [2024-11-06 15:41:32.576170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.827 [2024-11-06 15:41:32.576199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.827 [2024-11-06 15:41:32.576208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.827 [2024-11-06 15:41:32.576374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.827 [2024-11-06 15:41:32.576526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.827 [2024-11-06 15:41:32.576532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.827 [2024-11-06 15:41:32.576537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.827 [2024-11-06 15:41:32.576543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.827 [2024-11-06 15:41:32.588360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.827 [2024-11-06 15:41:32.588845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.827 [2024-11-06 15:41:32.588860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.827 [2024-11-06 15:41:32.588866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.827 [2024-11-06 15:41:32.589016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.827 [2024-11-06 15:41:32.589166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.827 [2024-11-06 15:41:32.589172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.827 [2024-11-06 15:41:32.589177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.827 [2024-11-06 15:41:32.589181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.827 [2024-11-06 15:41:32.601013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.827 [2024-11-06 15:41:32.601506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.827 [2024-11-06 15:41:32.601520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.827 [2024-11-06 15:41:32.601525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.827 [2024-11-06 15:41:32.601675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.827 [2024-11-06 15:41:32.601830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.827 [2024-11-06 15:41:32.601836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.827 [2024-11-06 15:41:32.601841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.827 [2024-11-06 15:41:32.601845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.827 [2024-11-06 15:41:32.613651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.827 [2024-11-06 15:41:32.614257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.827 [2024-11-06 15:41:32.614287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.827 [2024-11-06 15:41:32.614299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.827 [2024-11-06 15:41:32.614464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.827 [2024-11-06 15:41:32.614617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.827 [2024-11-06 15:41:32.614623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.827 [2024-11-06 15:41:32.614628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.827 [2024-11-06 15:41:32.614634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.827 [2024-11-06 15:41:32.626317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.828 [2024-11-06 15:41:32.626847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.828 [2024-11-06 15:41:32.626878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.828 [2024-11-06 15:41:32.626886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.828 [2024-11-06 15:41:32.627053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.828 [2024-11-06 15:41:32.627206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.828 [2024-11-06 15:41:32.627212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.828 [2024-11-06 15:41:32.627217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.828 [2024-11-06 15:41:32.627223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.828 [2024-11-06 15:41:32.639039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.828 [2024-11-06 15:41:32.639622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.828 [2024-11-06 15:41:32.639652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.828 [2024-11-06 15:41:32.639660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.828 [2024-11-06 15:41:32.639832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.828 [2024-11-06 15:41:32.639987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.828 [2024-11-06 15:41:32.639993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.828 [2024-11-06 15:41:32.639998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.828 [2024-11-06 15:41:32.640003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.828 [2024-11-06 15:41:32.651675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.828 [2024-11-06 15:41:32.652287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.828 [2024-11-06 15:41:32.652317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.828 [2024-11-06 15:41:32.652325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.828 [2024-11-06 15:41:32.652491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.828 [2024-11-06 15:41:32.652647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.828 [2024-11-06 15:41:32.652653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.828 [2024-11-06 15:41:32.652659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.828 [2024-11-06 15:41:32.652664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.828 [2024-11-06 15:41:32.664336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.828 [2024-11-06 15:41:32.664954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.828 [2024-11-06 15:41:32.664984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.828 [2024-11-06 15:41:32.664993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.828 [2024-11-06 15:41:32.665158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.828 [2024-11-06 15:41:32.665311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.828 [2024-11-06 15:41:32.665317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.828 [2024-11-06 15:41:32.665323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.828 [2024-11-06 15:41:32.665328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.828 [2024-11-06 15:41:32.677016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.828 [2024-11-06 15:41:32.677583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.828 [2024-11-06 15:41:32.677613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.828 [2024-11-06 15:41:32.677621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.828 [2024-11-06 15:41:32.677795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.828 [2024-11-06 15:41:32.677948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.828 [2024-11-06 15:41:32.677954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.828 [2024-11-06 15:41:32.677960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.828 [2024-11-06 15:41:32.677965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.828 [2024-11-06 15:41:32.689640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.828 [2024-11-06 15:41:32.690243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.828 [2024-11-06 15:41:32.690273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.828 [2024-11-06 15:41:32.690282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.828 [2024-11-06 15:41:32.690449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.828 [2024-11-06 15:41:32.690603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.828 [2024-11-06 15:41:32.690608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.828 [2024-11-06 15:41:32.690618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.828 [2024-11-06 15:41:32.690623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.828 [2024-11-06 15:41:32.702297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.828 [2024-11-06 15:41:32.702841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.828 [2024-11-06 15:41:32.702872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.828 [2024-11-06 15:41:32.702881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.828 [2024-11-06 15:41:32.703046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.828 [2024-11-06 15:41:32.703198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.828 [2024-11-06 15:41:32.703205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.828 [2024-11-06 15:41:32.703210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.828 [2024-11-06 15:41:32.703215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.828 [2024-11-06 15:41:32.714889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.828 [2024-11-06 15:41:32.715456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.828 [2024-11-06 15:41:32.715485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.828 [2024-11-06 15:41:32.715494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.828 [2024-11-06 15:41:32.715660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.828 [2024-11-06 15:41:32.715819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.828 [2024-11-06 15:41:32.715826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.828 [2024-11-06 15:41:32.715831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.828 [2024-11-06 15:41:32.715837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.829 [2024-11-06 15:41:32.727522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.829 [2024-11-06 15:41:32.728098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.829 [2024-11-06 15:41:32.728128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.829 [2024-11-06 15:41:32.728136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.829 [2024-11-06 15:41:32.728309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.829 [2024-11-06 15:41:32.728462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.829 [2024-11-06 15:41:32.728468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.829 [2024-11-06 15:41:32.728474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.829 [2024-11-06 15:41:32.728479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.829 [2024-11-06 15:41:32.740164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.829 [2024-11-06 15:41:32.740744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.829 [2024-11-06 15:41:32.740779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.829 [2024-11-06 15:41:32.740787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.829 [2024-11-06 15:41:32.740952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.829 [2024-11-06 15:41:32.741105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.829 [2024-11-06 15:41:32.741111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.829 [2024-11-06 15:41:32.741116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.829 [2024-11-06 15:41:32.741122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.829 [2024-11-06 15:41:32.752789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.829 [2024-11-06 15:41:32.753331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.829 [2024-11-06 15:41:32.753361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.829 [2024-11-06 15:41:32.753369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.829 [2024-11-06 15:41:32.753535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.829 [2024-11-06 15:41:32.753688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.829 [2024-11-06 15:41:32.753693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.829 [2024-11-06 15:41:32.753699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.829 [2024-11-06 15:41:32.753704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.829 [2024-11-06 15:41:32.765592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.829 [2024-11-06 15:41:32.766179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.829 [2024-11-06 15:41:32.766209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.829 [2024-11-06 15:41:32.766218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.829 [2024-11-06 15:41:32.766383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.829 [2024-11-06 15:41:32.766535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.829 [2024-11-06 15:41:32.766542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.829 [2024-11-06 15:41:32.766547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.829 [2024-11-06 15:41:32.766552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.829 [2024-11-06 15:41:32.778222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.829 [2024-11-06 15:41:32.778753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.829 [2024-11-06 15:41:32.778782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:14.829 [2024-11-06 15:41:32.778794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:14.829 [2024-11-06 15:41:32.778962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:14.829 [2024-11-06 15:41:32.779115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.829 [2024-11-06 15:41:32.779120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.829 [2024-11-06 15:41:32.779126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.829 [2024-11-06 15:41:32.779132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.829 [2024-11-06 15:41:32.790942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.829 [2024-11-06 15:41:32.791510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.829 [2024-11-06 15:41:32.791540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:14.829 [2024-11-06 15:41:32.791548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:14.829 [2024-11-06 15:41:32.791714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:14.829 [2024-11-06 15:41:32.791875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.829 [2024-11-06 15:41:32.791882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.829 [2024-11-06 15:41:32.791887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.829 [2024-11-06 15:41:32.791893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.829 [2024-11-06 15:41:32.803562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.829 [2024-11-06 15:41:32.804058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.829 [2024-11-06 15:41:32.804073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:14.829 [2024-11-06 15:41:32.804079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:14.829 [2024-11-06 15:41:32.804229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:14.829 [2024-11-06 15:41:32.804379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.829 [2024-11-06 15:41:32.804384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.829 [2024-11-06 15:41:32.804389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.829 [2024-11-06 15:41:32.804394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.092 [2024-11-06 15:41:32.816226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.092 [2024-11-06 15:41:32.816684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.092 [2024-11-06 15:41:32.816697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.092 [2024-11-06 15:41:32.816702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.092 [2024-11-06 15:41:32.816858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.092 [2024-11-06 15:41:32.817012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.092 [2024-11-06 15:41:32.817018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.093 [2024-11-06 15:41:32.817023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.093 [2024-11-06 15:41:32.817027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.093 [2024-11-06 15:41:32.828871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.093 [2024-11-06 15:41:32.829356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.093 [2024-11-06 15:41:32.829369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.093 [2024-11-06 15:41:32.829374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.093 [2024-11-06 15:41:32.829524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.093 [2024-11-06 15:41:32.829674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.093 [2024-11-06 15:41:32.829680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.093 [2024-11-06 15:41:32.829685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.093 [2024-11-06 15:41:32.829690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.093 [2024-11-06 15:41:32.841498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.093 [2024-11-06 15:41:32.842067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.093 [2024-11-06 15:41:32.842097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.093 [2024-11-06 15:41:32.842105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.093 [2024-11-06 15:41:32.842270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.093 [2024-11-06 15:41:32.842423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.093 [2024-11-06 15:41:32.842429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.093 [2024-11-06 15:41:32.842435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.093 [2024-11-06 15:41:32.842440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.093 [2024-11-06 15:41:32.854138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.093 [2024-11-06 15:41:32.854627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.093 [2024-11-06 15:41:32.854642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.093 [2024-11-06 15:41:32.854647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.093 [2024-11-06 15:41:32.854802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.093 [2024-11-06 15:41:32.854953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.093 [2024-11-06 15:41:32.854958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.093 [2024-11-06 15:41:32.854964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.093 [2024-11-06 15:41:32.854972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.093 [2024-11-06 15:41:32.866774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.093 [2024-11-06 15:41:32.867255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.093 [2024-11-06 15:41:32.867268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.093 [2024-11-06 15:41:32.867274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.093 [2024-11-06 15:41:32.867424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.093 [2024-11-06 15:41:32.867573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.093 [2024-11-06 15:41:32.867578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.093 [2024-11-06 15:41:32.867583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.093 [2024-11-06 15:41:32.867588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.093 [2024-11-06 15:41:32.879388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.093 [2024-11-06 15:41:32.880023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.093 [2024-11-06 15:41:32.880053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.093 [2024-11-06 15:41:32.880062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.093 [2024-11-06 15:41:32.880227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.093 [2024-11-06 15:41:32.880380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.093 [2024-11-06 15:41:32.880386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.093 [2024-11-06 15:41:32.880391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.093 [2024-11-06 15:41:32.880397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.093 [2024-11-06 15:41:32.892068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.093 [2024-11-06 15:41:32.892636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.093 [2024-11-06 15:41:32.892666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.093 [2024-11-06 15:41:32.892675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.093 [2024-11-06 15:41:32.892848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.093 [2024-11-06 15:41:32.893001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.093 [2024-11-06 15:41:32.893008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.093 [2024-11-06 15:41:32.893014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.093 [2024-11-06 15:41:32.893019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.093 [2024-11-06 15:41:32.904687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.093 [2024-11-06 15:41:32.905287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.093 [2024-11-06 15:41:32.905317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.093 [2024-11-06 15:41:32.905326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.093 [2024-11-06 15:41:32.905494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.093 [2024-11-06 15:41:32.905647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.093 [2024-11-06 15:41:32.905653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.093 [2024-11-06 15:41:32.905660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.093 [2024-11-06 15:41:32.905666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.093 [2024-11-06 15:41:32.917343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.093 [2024-11-06 15:41:32.917952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.093 [2024-11-06 15:41:32.917982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.093 [2024-11-06 15:41:32.917990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.093 [2024-11-06 15:41:32.918156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.093 [2024-11-06 15:41:32.918309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.093 [2024-11-06 15:41:32.918314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.093 [2024-11-06 15:41:32.918320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.093 [2024-11-06 15:41:32.918325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.093 [2024-11-06 15:41:32.930025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.093 [2024-11-06 15:41:32.930600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.093 [2024-11-06 15:41:32.930630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.093 [2024-11-06 15:41:32.930638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.093 [2024-11-06 15:41:32.930814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.093 [2024-11-06 15:41:32.930967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.093 [2024-11-06 15:41:32.930973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.093 [2024-11-06 15:41:32.930978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.093 [2024-11-06 15:41:32.930984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.093 [2024-11-06 15:41:32.942707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.093 [2024-11-06 15:41:32.943276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.093 [2024-11-06 15:41:32.943306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.093 [2024-11-06 15:41:32.943318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.093 [2024-11-06 15:41:32.943484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.093 [2024-11-06 15:41:32.943637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.093 [2024-11-06 15:41:32.943642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.094 [2024-11-06 15:41:32.943648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.094 [2024-11-06 15:41:32.943653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.094 [2024-11-06 15:41:32.955326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.094 [2024-11-06 15:41:32.955831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.094 [2024-11-06 15:41:32.955861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.094 [2024-11-06 15:41:32.955870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.094 [2024-11-06 15:41:32.956038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.094 [2024-11-06 15:41:32.956191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.094 [2024-11-06 15:41:32.956197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.094 [2024-11-06 15:41:32.956202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.094 [2024-11-06 15:41:32.956208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.094 [2024-11-06 15:41:32.968022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.094 [2024-11-06 15:41:32.968588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.094 [2024-11-06 15:41:32.968618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.094 [2024-11-06 15:41:32.968627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.094 [2024-11-06 15:41:32.968799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.094 [2024-11-06 15:41:32.968952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.094 [2024-11-06 15:41:32.968957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.094 [2024-11-06 15:41:32.968963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.094 [2024-11-06 15:41:32.968969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.094 [2024-11-06 15:41:32.980631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.094 [2024-11-06 15:41:32.981116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.094 [2024-11-06 15:41:32.981146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.094 [2024-11-06 15:41:32.981155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.094 [2024-11-06 15:41:32.981320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.094 [2024-11-06 15:41:32.981473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.094 [2024-11-06 15:41:32.981482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.094 [2024-11-06 15:41:32.981487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.094 [2024-11-06 15:41:32.981493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.094 [2024-11-06 15:41:32.993308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.094 [2024-11-06 15:41:32.993964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.094 [2024-11-06 15:41:32.993994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.094 [2024-11-06 15:41:32.994003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.094 [2024-11-06 15:41:32.994168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.094 [2024-11-06 15:41:32.994321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.094 [2024-11-06 15:41:32.994327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.094 [2024-11-06 15:41:32.994332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.094 [2024-11-06 15:41:32.994338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.094 [2024-11-06 15:41:33.006015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.094 [2024-11-06 15:41:33.006591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.094 [2024-11-06 15:41:33.006621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.094 [2024-11-06 15:41:33.006630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.094 [2024-11-06 15:41:33.006802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.094 [2024-11-06 15:41:33.006955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.094 [2024-11-06 15:41:33.006961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.094 [2024-11-06 15:41:33.006967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.094 [2024-11-06 15:41:33.006972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.094 [2024-11-06 15:41:33.018670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.094 [2024-11-06 15:41:33.019186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.094 [2024-11-06 15:41:33.019215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.094 [2024-11-06 15:41:33.019224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.094 [2024-11-06 15:41:33.019392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.094 [2024-11-06 15:41:33.019545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.094 [2024-11-06 15:41:33.019551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.094 [2024-11-06 15:41:33.019556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.094 [2024-11-06 15:41:33.019565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.094 [2024-11-06 15:41:33.031406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.094 [2024-11-06 15:41:33.031869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.094 [2024-11-06 15:41:33.031899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.094 [2024-11-06 15:41:33.031908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.094 [2024-11-06 15:41:33.032076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.094 [2024-11-06 15:41:33.032229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.094 [2024-11-06 15:41:33.032235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.094 [2024-11-06 15:41:33.032240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.094 [2024-11-06 15:41:33.032245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.094 [2024-11-06 15:41:33.044064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.094 [2024-11-06 15:41:33.044637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.094 [2024-11-06 15:41:33.044667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.094 [2024-11-06 15:41:33.044675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.094 [2024-11-06 15:41:33.044848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.094 [2024-11-06 15:41:33.045001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.094 [2024-11-06 15:41:33.045007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.094 [2024-11-06 15:41:33.045012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.094 [2024-11-06 15:41:33.045018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.094 [2024-11-06 15:41:33.056693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.094 [2024-11-06 15:41:33.057269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.094 [2024-11-06 15:41:33.057299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.094 [2024-11-06 15:41:33.057308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.094 [2024-11-06 15:41:33.057474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.094 [2024-11-06 15:41:33.057627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.094 [2024-11-06 15:41:33.057633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.094 [2024-11-06 15:41:33.057638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.094 [2024-11-06 15:41:33.057644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.094 [2024-11-06 15:41:33.069314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.094 [2024-11-06 15:41:33.069807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.094 [2024-11-06 15:41:33.069837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.095 [2024-11-06 15:41:33.069846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.095 [2024-11-06 15:41:33.070014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.095 [2024-11-06 15:41:33.070166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.095 [2024-11-06 15:41:33.070172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.095 [2024-11-06 15:41:33.070177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.095 [2024-11-06 15:41:33.070183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.357 [2024-11-06 15:41:33.082014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.357 [2024-11-06 15:41:33.082579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-06 15:41:33.082608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.357 [2024-11-06 15:41:33.082617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.357 [2024-11-06 15:41:33.082790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.357 [2024-11-06 15:41:33.082943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.357 [2024-11-06 15:41:33.082949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.357 [2024-11-06 15:41:33.082955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.357 [2024-11-06 15:41:33.082961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.357 [2024-11-06 15:41:33.094620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.357 [2024-11-06 15:41:33.095176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-06 15:41:33.095206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.357 [2024-11-06 15:41:33.095215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.357 [2024-11-06 15:41:33.095380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.357 [2024-11-06 15:41:33.095533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.357 [2024-11-06 15:41:33.095539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.357 [2024-11-06 15:41:33.095544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.357 [2024-11-06 15:41:33.095550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.357 [2024-11-06 15:41:33.107224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.357 [2024-11-06 15:41:33.107709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-06 15:41:33.107724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.357 [2024-11-06 15:41:33.107730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.357 [2024-11-06 15:41:33.107888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.357 [2024-11-06 15:41:33.108039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.357 [2024-11-06 15:41:33.108045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.357 [2024-11-06 15:41:33.108050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.357 [2024-11-06 15:41:33.108054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.357 [2024-11-06 15:41:33.119866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.357 [2024-11-06 15:41:33.120356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-06 15:41:33.120369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.357 [2024-11-06 15:41:33.120374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.357 [2024-11-06 15:41:33.120524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.357 [2024-11-06 15:41:33.120673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.357 [2024-11-06 15:41:33.120678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.357 [2024-11-06 15:41:33.120683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.357 [2024-11-06 15:41:33.120688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.357 [2024-11-06 15:41:33.132508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.357 [2024-11-06 15:41:33.133111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-06 15:41:33.133141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.357 [2024-11-06 15:41:33.133150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.357 [2024-11-06 15:41:33.133315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.358 [2024-11-06 15:41:33.133467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.358 [2024-11-06 15:41:33.133473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.358 [2024-11-06 15:41:33.133479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.358 [2024-11-06 15:41:33.133484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.358 [2024-11-06 15:41:33.145153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.358 [2024-11-06 15:41:33.145733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-06 15:41:33.145768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.358 [2024-11-06 15:41:33.145776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.358 [2024-11-06 15:41:33.145942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.358 [2024-11-06 15:41:33.146095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.358 [2024-11-06 15:41:33.146105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.358 [2024-11-06 15:41:33.146111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.358 [2024-11-06 15:41:33.146116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.358 [2024-11-06 15:41:33.157796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.358 [2024-11-06 15:41:33.158366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-06 15:41:33.158396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.358 [2024-11-06 15:41:33.158405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.358 [2024-11-06 15:41:33.158570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.358 [2024-11-06 15:41:33.158723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.358 [2024-11-06 15:41:33.158729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.358 [2024-11-06 15:41:33.158734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.358 [2024-11-06 15:41:33.158740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.358 [2024-11-06 15:41:33.170406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.358 [2024-11-06 15:41:33.171038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-06 15:41:33.171068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.358 [2024-11-06 15:41:33.171076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.358 [2024-11-06 15:41:33.171242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.358 [2024-11-06 15:41:33.171394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.358 [2024-11-06 15:41:33.171400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.358 [2024-11-06 15:41:33.171405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.358 [2024-11-06 15:41:33.171411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.358 [2024-11-06 15:41:33.183083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.358 [2024-11-06 15:41:33.183662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-06 15:41:33.183692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.358 [2024-11-06 15:41:33.183700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.358 [2024-11-06 15:41:33.183873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.358 [2024-11-06 15:41:33.184026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.358 [2024-11-06 15:41:33.184032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.358 [2024-11-06 15:41:33.184037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.358 [2024-11-06 15:41:33.184046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.358 [2024-11-06 15:41:33.195710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.358 [2024-11-06 15:41:33.196277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-06 15:41:33.196307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.358 [2024-11-06 15:41:33.196315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.358 [2024-11-06 15:41:33.196481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.358 [2024-11-06 15:41:33.196633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.358 [2024-11-06 15:41:33.196640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.358 [2024-11-06 15:41:33.196645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.358 [2024-11-06 15:41:33.196650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.358 [2024-11-06 15:41:33.208317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.358 [2024-11-06 15:41:33.208933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-06 15:41:33.208963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.358 [2024-11-06 15:41:33.208972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.358 [2024-11-06 15:41:33.209137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.358 [2024-11-06 15:41:33.209289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.358 [2024-11-06 15:41:33.209295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.358 [2024-11-06 15:41:33.209301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.358 [2024-11-06 15:41:33.209306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.358 [2024-11-06 15:41:33.221023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.358 [2024-11-06 15:41:33.221563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-06 15:41:33.221593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.358 [2024-11-06 15:41:33.221601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.358 [2024-11-06 15:41:33.221774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.358 [2024-11-06 15:41:33.221927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.358 [2024-11-06 15:41:33.221933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.358 [2024-11-06 15:41:33.221938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.358 [2024-11-06 15:41:33.221944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.358 [2024-11-06 15:41:33.233626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.358 [2024-11-06 15:41:33.234179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-06 15:41:33.234213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.358 [2024-11-06 15:41:33.234221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.358 [2024-11-06 15:41:33.234387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.358 [2024-11-06 15:41:33.234540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.358 [2024-11-06 15:41:33.234546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.358 [2024-11-06 15:41:33.234551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.358 [2024-11-06 15:41:33.234557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.358 [2024-11-06 15:41:33.246234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.358 [2024-11-06 15:41:33.246721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-06 15:41:33.246757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.358 [2024-11-06 15:41:33.246765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.358 [2024-11-06 15:41:33.246931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.358 [2024-11-06 15:41:33.247084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.359 [2024-11-06 15:41:33.247090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.359 [2024-11-06 15:41:33.247095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.359 [2024-11-06 15:41:33.247101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.359 [2024-11-06 15:41:33.258909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.359 [2024-11-06 15:41:33.259477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-06 15:41:33.259507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.359 [2024-11-06 15:41:33.259516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.359 [2024-11-06 15:41:33.259682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.359 [2024-11-06 15:41:33.259843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.359 [2024-11-06 15:41:33.259850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.359 [2024-11-06 15:41:33.259856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.359 [2024-11-06 15:41:33.259861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.359 [2024-11-06 15:41:33.271531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.359 [2024-11-06 15:41:33.272127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-06 15:41:33.272157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.359 [2024-11-06 15:41:33.272166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.359 [2024-11-06 15:41:33.272334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.359 [2024-11-06 15:41:33.272487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.359 [2024-11-06 15:41:33.272493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.359 [2024-11-06 15:41:33.272498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.359 [2024-11-06 15:41:33.272504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.359 [2024-11-06 15:41:33.284180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.359 [2024-11-06 15:41:33.284772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-06 15:41:33.284802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.359 [2024-11-06 15:41:33.284811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.359 [2024-11-06 15:41:33.284976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.359 [2024-11-06 15:41:33.285129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.359 [2024-11-06 15:41:33.285135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.359 [2024-11-06 15:41:33.285140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.359 [2024-11-06 15:41:33.285146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.359 [2024-11-06 15:41:33.296823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.359 [2024-11-06 15:41:33.297395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-06 15:41:33.297425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.359 [2024-11-06 15:41:33.297433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.359 [2024-11-06 15:41:33.297599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.359 [2024-11-06 15:41:33.297759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.359 [2024-11-06 15:41:33.297766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.359 [2024-11-06 15:41:33.297771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.359 [2024-11-06 15:41:33.297777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.359 [2024-11-06 15:41:33.309443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.359 [2024-11-06 15:41:33.310040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-06 15:41:33.310070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.359 [2024-11-06 15:41:33.310079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.359 [2024-11-06 15:41:33.310245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.359 [2024-11-06 15:41:33.310398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.359 [2024-11-06 15:41:33.310407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.359 [2024-11-06 15:41:33.310413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.359 [2024-11-06 15:41:33.310418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.359 [2024-11-06 15:41:33.322098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.359 [2024-11-06 15:41:33.322668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-06 15:41:33.322697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.359 [2024-11-06 15:41:33.322706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.359 [2024-11-06 15:41:33.322878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.359 [2024-11-06 15:41:33.323031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.359 [2024-11-06 15:41:33.323037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.359 [2024-11-06 15:41:33.323042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.359 [2024-11-06 15:41:33.323048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.359 5660.80 IOPS, 22.11 MiB/s [2024-11-06T14:41:33.342Z] [2024-11-06 15:41:33.334736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.359 [2024-11-06 15:41:33.335305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-06 15:41:33.335336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.359 [2024-11-06 15:41:33.335344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.359 [2024-11-06 15:41:33.335510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.359 [2024-11-06 15:41:33.335662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.359 [2024-11-06 15:41:33.335668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.359 [2024-11-06 15:41:33.335673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.359 [2024-11-06 15:41:33.335679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.622 [2024-11-06 15:41:33.347392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.622 [2024-11-06 15:41:33.347870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.622 [2024-11-06 15:41:33.347898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.622 [2024-11-06 15:41:33.347907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.622 [2024-11-06 15:41:33.348075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.622 [2024-11-06 15:41:33.348228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.622 [2024-11-06 15:41:33.348234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.622 [2024-11-06 15:41:33.348239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.622 [2024-11-06 15:41:33.348252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.622 [2024-11-06 15:41:33.360096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.622 [2024-11-06 15:41:33.360669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.622 [2024-11-06 15:41:33.360699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.622 [2024-11-06 15:41:33.360708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.622 [2024-11-06 15:41:33.360879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.622 [2024-11-06 15:41:33.361033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.622 [2024-11-06 15:41:33.361039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.622 [2024-11-06 15:41:33.361045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.622 [2024-11-06 15:41:33.361050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.622 [2024-11-06 15:41:33.372722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.622 [2024-11-06 15:41:33.373290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.622 [2024-11-06 15:41:33.373319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.622 [2024-11-06 15:41:33.373328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.622 [2024-11-06 15:41:33.373494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.622 [2024-11-06 15:41:33.373646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.622 [2024-11-06 15:41:33.373653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.622 [2024-11-06 15:41:33.373658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.622 [2024-11-06 15:41:33.373663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.622 [2024-11-06 15:41:33.385341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.622 [2024-11-06 15:41:33.385847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.622 [2024-11-06 15:41:33.385878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.622 [2024-11-06 15:41:33.385886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.622 [2024-11-06 15:41:33.386054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.622 [2024-11-06 15:41:33.386207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.622 [2024-11-06 15:41:33.386213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.622 [2024-11-06 15:41:33.386219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.622 [2024-11-06 15:41:33.386224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.622 [2024-11-06 15:41:33.398048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.622 [2024-11-06 15:41:33.398537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.622 [2024-11-06 15:41:33.398556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.622 [2024-11-06 15:41:33.398562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.622 [2024-11-06 15:41:33.398712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.622 [2024-11-06 15:41:33.398868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.622 [2024-11-06 15:41:33.398874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.622 [2024-11-06 15:41:33.398879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.622 [2024-11-06 15:41:33.398884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.622 [2024-11-06 15:41:33.410690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.622 [2024-11-06 15:41:33.411149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.622 [2024-11-06 15:41:33.411162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.622 [2024-11-06 15:41:33.411168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.622 [2024-11-06 15:41:33.411318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.622 [2024-11-06 15:41:33.411468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.622 [2024-11-06 15:41:33.411473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.622 [2024-11-06 15:41:33.411478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.622 [2024-11-06 15:41:33.411483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.622 [2024-11-06 15:41:33.423287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.622 [2024-11-06 15:41:33.423625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.622 [2024-11-06 15:41:33.423638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.622 [2024-11-06 15:41:33.423643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.622 [2024-11-06 15:41:33.423796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.622 [2024-11-06 15:41:33.423946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.622 [2024-11-06 15:41:33.423951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.622 [2024-11-06 15:41:33.423957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.622 [2024-11-06 15:41:33.423961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.622 [2024-11-06 15:41:33.435952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.622 [2024-11-06 15:41:33.436505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.622 [2024-11-06 15:41:33.436535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.622 [2024-11-06 15:41:33.436544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.622 [2024-11-06 15:41:33.436713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.622 [2024-11-06 15:41:33.436872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.622 [2024-11-06 15:41:33.436879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.622 [2024-11-06 15:41:33.436884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.622 [2024-11-06 15:41:33.436890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.622 [2024-11-06 15:41:33.448570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.622 [2024-11-06 15:41:33.449159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.622 [2024-11-06 15:41:33.449188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.622 [2024-11-06 15:41:33.449197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.623 [2024-11-06 15:41:33.449365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.623 [2024-11-06 15:41:33.449517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.623 [2024-11-06 15:41:33.449523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.623 [2024-11-06 15:41:33.449529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.623 [2024-11-06 15:41:33.449535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.623 [2024-11-06 15:41:33.461258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.623 [2024-11-06 15:41:33.461720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.623 [2024-11-06 15:41:33.461735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.623 [2024-11-06 15:41:33.461741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.623 [2024-11-06 15:41:33.461897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.623 [2024-11-06 15:41:33.462048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.623 [2024-11-06 15:41:33.462053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.623 [2024-11-06 15:41:33.462058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.623 [2024-11-06 15:41:33.462063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.623 [2024-11-06 15:41:33.473867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.623 [2024-11-06 15:41:33.474436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.623 [2024-11-06 15:41:33.474465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.623 [2024-11-06 15:41:33.474474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.623 [2024-11-06 15:41:33.474640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.623 [2024-11-06 15:41:33.474799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.623 [2024-11-06 15:41:33.474809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.623 [2024-11-06 15:41:33.474815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.623 [2024-11-06 15:41:33.474820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.623 [2024-11-06 15:41:33.486480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.623 [2024-11-06 15:41:33.487002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.623 [2024-11-06 15:41:33.487032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.623 [2024-11-06 15:41:33.487041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.623 [2024-11-06 15:41:33.487209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.623 [2024-11-06 15:41:33.487362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.623 [2024-11-06 15:41:33.487368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.623 [2024-11-06 15:41:33.487374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.623 [2024-11-06 15:41:33.487379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.623 [2024-11-06 15:41:33.499203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.623 [2024-11-06 15:41:33.499799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.623 [2024-11-06 15:41:33.499829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.623 [2024-11-06 15:41:33.499838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.623 [2024-11-06 15:41:33.500006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.623 [2024-11-06 15:41:33.500159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.623 [2024-11-06 15:41:33.500164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.623 [2024-11-06 15:41:33.500170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.623 [2024-11-06 15:41:33.500176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.623 [2024-11-06 15:41:33.511853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.623 [2024-11-06 15:41:33.512422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.623 [2024-11-06 15:41:33.512452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.623 [2024-11-06 15:41:33.512461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.623 [2024-11-06 15:41:33.512627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.623 [2024-11-06 15:41:33.512786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.623 [2024-11-06 15:41:33.512793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.623 [2024-11-06 15:41:33.512798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.623 [2024-11-06 15:41:33.512804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.623 [2024-11-06 15:41:33.524478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.623 [2024-11-06 15:41:33.524962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.623 [2024-11-06 15:41:33.524977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.623 [2024-11-06 15:41:33.524983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.623 [2024-11-06 15:41:33.525133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.623 [2024-11-06 15:41:33.525283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.623 [2024-11-06 15:41:33.525288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.623 [2024-11-06 15:41:33.525293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.623 [2024-11-06 15:41:33.525298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.623 [2024-11-06 15:41:33.537122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.623 [2024-11-06 15:41:33.537691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.623 [2024-11-06 15:41:33.537721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.623 [2024-11-06 15:41:33.537730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.623 [2024-11-06 15:41:33.537905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.623 [2024-11-06 15:41:33.538058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.623 [2024-11-06 15:41:33.538064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.623 [2024-11-06 15:41:33.538070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.623 [2024-11-06 15:41:33.538075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.623 [2024-11-06 15:41:33.549743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.623 [2024-11-06 15:41:33.550334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.623 [2024-11-06 15:41:33.550364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.623 [2024-11-06 15:41:33.550373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.623 [2024-11-06 15:41:33.550539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.623 [2024-11-06 15:41:33.550692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.623 [2024-11-06 15:41:33.550698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.623 [2024-11-06 15:41:33.550703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.623 [2024-11-06 15:41:33.550709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.623 [2024-11-06 15:41:33.562384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.623 [2024-11-06 15:41:33.563095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.623 [2024-11-06 15:41:33.563128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.623 [2024-11-06 15:41:33.563137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.623 [2024-11-06 15:41:33.563302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.623 [2024-11-06 15:41:33.563455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.623 [2024-11-06 15:41:33.563460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.623 [2024-11-06 15:41:33.563466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.624 [2024-11-06 15:41:33.563471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.624 [2024-11-06 15:41:33.575011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.624 [2024-11-06 15:41:33.575394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.624 [2024-11-06 15:41:33.575409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.624 [2024-11-06 15:41:33.575414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.624 [2024-11-06 15:41:33.575564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.624 [2024-11-06 15:41:33.575714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.624 [2024-11-06 15:41:33.575719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.624 [2024-11-06 15:41:33.575725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.624 [2024-11-06 15:41:33.575730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.624 [2024-11-06 15:41:33.587681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.624 [2024-11-06 15:41:33.588247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.624 [2024-11-06 15:41:33.588260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.624 [2024-11-06 15:41:33.588265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.624 [2024-11-06 15:41:33.588414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.624 [2024-11-06 15:41:33.588564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.624 [2024-11-06 15:41:33.588569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.624 [2024-11-06 15:41:33.588574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.624 [2024-11-06 15:41:33.588579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.624 [2024-11-06 15:41:33.600386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.624 [2024-11-06 15:41:33.600818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.624 [2024-11-06 15:41:33.600830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.624 [2024-11-06 15:41:33.600836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.624 [2024-11-06 15:41:33.600989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.624 [2024-11-06 15:41:33.601140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.624 [2024-11-06 15:41:33.601146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.624 [2024-11-06 15:41:33.601151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.624 [2024-11-06 15:41:33.601156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.887 [2024-11-06 15:41:33.613110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.887 [2024-11-06 15:41:33.613503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.887 [2024-11-06 15:41:33.613516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.887 [2024-11-06 15:41:33.613521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.887 [2024-11-06 15:41:33.613670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.887 [2024-11-06 15:41:33.613825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.887 [2024-11-06 15:41:33.613831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.887 [2024-11-06 15:41:33.613836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.887 [2024-11-06 15:41:33.613841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.887 [2024-11-06 15:41:33.625790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.887 [2024-11-06 15:41:33.626226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.887 [2024-11-06 15:41:33.626255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.887 [2024-11-06 15:41:33.626264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.887 [2024-11-06 15:41:33.626432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.887 [2024-11-06 15:41:33.626584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.887 [2024-11-06 15:41:33.626590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.887 [2024-11-06 15:41:33.626597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.887 [2024-11-06 15:41:33.626602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.887 [2024-11-06 15:41:33.638471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.887 [2024-11-06 15:41:33.639149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.887 [2024-11-06 15:41:33.639179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.887 [2024-11-06 15:41:33.639188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.887 [2024-11-06 15:41:33.639353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.887 [2024-11-06 15:41:33.639506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.887 [2024-11-06 15:41:33.639512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.887 [2024-11-06 15:41:33.639521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.887 [2024-11-06 15:41:33.639526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.887 [2024-11-06 15:41:33.651073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.887 [2024-11-06 15:41:33.651569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.887 [2024-11-06 15:41:33.651584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.887 [2024-11-06 15:41:33.651590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.887 [2024-11-06 15:41:33.651740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.887 [2024-11-06 15:41:33.651896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.887 [2024-11-06 15:41:33.651902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.887 [2024-11-06 15:41:33.651907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.887 [2024-11-06 15:41:33.651912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.887 [2024-11-06 15:41:33.663716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.887 [2024-11-06 15:41:33.664320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.887 [2024-11-06 15:41:33.664350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.887 [2024-11-06 15:41:33.664358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.887 [2024-11-06 15:41:33.664524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.887 [2024-11-06 15:41:33.664677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.887 [2024-11-06 15:41:33.664683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.887 [2024-11-06 15:41:33.664688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.887 [2024-11-06 15:41:33.664694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.887 [2024-11-06 15:41:33.676368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.887 [2024-11-06 15:41:33.677016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.887 [2024-11-06 15:41:33.677046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.887 [2024-11-06 15:41:33.677055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.887 [2024-11-06 15:41:33.677220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.887 [2024-11-06 15:41:33.677373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.888 [2024-11-06 15:41:33.677379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.888 [2024-11-06 15:41:33.677385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.888 [2024-11-06 15:41:33.677390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.888 [2024-11-06 15:41:33.689063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.888 [2024-11-06 15:41:33.689643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.888 [2024-11-06 15:41:33.689672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.888 [2024-11-06 15:41:33.689681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.888 [2024-11-06 15:41:33.689852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.888 [2024-11-06 15:41:33.690005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.888 [2024-11-06 15:41:33.690011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.888 [2024-11-06 15:41:33.690016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.888 [2024-11-06 15:41:33.690022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.888 [2024-11-06 15:41:33.701693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.888 [2024-11-06 15:41:33.702290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.888 [2024-11-06 15:41:33.702320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.888 [2024-11-06 15:41:33.702329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.888 [2024-11-06 15:41:33.702497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.888 [2024-11-06 15:41:33.702650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.888 [2024-11-06 15:41:33.702656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.888 [2024-11-06 15:41:33.702661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.888 [2024-11-06 15:41:33.702667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.888 [2024-11-06 15:41:33.714342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.888 [2024-11-06 15:41:33.714854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.888 [2024-11-06 15:41:33.714884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.888 [2024-11-06 15:41:33.714893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.888 [2024-11-06 15:41:33.715061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.888 [2024-11-06 15:41:33.715214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.888 [2024-11-06 15:41:33.715220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.888 [2024-11-06 15:41:33.715225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.888 [2024-11-06 15:41:33.715231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.888 [2024-11-06 15:41:33.727093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.888 [2024-11-06 15:41:33.727579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.888 [2024-11-06 15:41:33.727594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.888 [2024-11-06 15:41:33.727603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.888 [2024-11-06 15:41:33.727758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.888 [2024-11-06 15:41:33.727909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.888 [2024-11-06 15:41:33.727915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.888 [2024-11-06 15:41:33.727920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.888 [2024-11-06 15:41:33.727925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.888 [2024-11-06 15:41:33.739750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.888 [2024-11-06 15:41:33.740197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.888 [2024-11-06 15:41:33.740210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.888 [2024-11-06 15:41:33.740215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.888 [2024-11-06 15:41:33.740365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.888 [2024-11-06 15:41:33.740515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.888 [2024-11-06 15:41:33.740520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.888 [2024-11-06 15:41:33.740525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.888 [2024-11-06 15:41:33.740530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.888 [2024-11-06 15:41:33.752340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.888 [2024-11-06 15:41:33.752784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.888 [2024-11-06 15:41:33.752797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.888 [2024-11-06 15:41:33.752802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.888 [2024-11-06 15:41:33.752951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.888 [2024-11-06 15:41:33.753101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.888 [2024-11-06 15:41:33.753106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.888 [2024-11-06 15:41:33.753111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.888 [2024-11-06 15:41:33.753116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.888 [2024-11-06 15:41:33.765063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.888 [2024-11-06 15:41:33.765577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.888 [2024-11-06 15:41:33.765607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.888 [2024-11-06 15:41:33.765616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.888 [2024-11-06 15:41:33.765787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.888 [2024-11-06 15:41:33.765944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.888 [2024-11-06 15:41:33.765950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.888 [2024-11-06 15:41:33.765955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.888 [2024-11-06 15:41:33.765961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.888 [2024-11-06 15:41:33.777786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.888 [2024-11-06 15:41:33.778334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.888 [2024-11-06 15:41:33.778364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.888 [2024-11-06 15:41:33.778372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.888 [2024-11-06 15:41:33.778537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.888 [2024-11-06 15:41:33.778691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.888 [2024-11-06 15:41:33.778696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.888 [2024-11-06 15:41:33.778702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.888 [2024-11-06 15:41:33.778707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.888 [2024-11-06 15:41:33.790378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.888 [2024-11-06 15:41:33.790887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.888 [2024-11-06 15:41:33.790917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.888 [2024-11-06 15:41:33.790926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.888 [2024-11-06 15:41:33.791094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.888 [2024-11-06 15:41:33.791247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.888 [2024-11-06 15:41:33.791253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.888 [2024-11-06 15:41:33.791258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.888 [2024-11-06 15:41:33.791263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.889 [2024-11-06 15:41:33.803097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.889 [2024-11-06 15:41:33.803544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.889 [2024-11-06 15:41:33.803573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.889 [2024-11-06 15:41:33.803582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.889 [2024-11-06 15:41:33.803756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.889 [2024-11-06 15:41:33.803910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.889 [2024-11-06 15:41:33.803916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.889 [2024-11-06 15:41:33.803924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.889 [2024-11-06 15:41:33.803930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.889 [2024-11-06 15:41:33.815738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.889 [2024-11-06 15:41:33.816305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.889 [2024-11-06 15:41:33.816335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.889 [2024-11-06 15:41:33.816344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.889 [2024-11-06 15:41:33.816510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.889 [2024-11-06 15:41:33.816663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.889 [2024-11-06 15:41:33.816669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.889 [2024-11-06 15:41:33.816674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.889 [2024-11-06 15:41:33.816680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.889 [2024-11-06 15:41:33.828355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.889 [2024-11-06 15:41:33.828997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.889 [2024-11-06 15:41:33.829027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.889 [2024-11-06 15:41:33.829035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.889 [2024-11-06 15:41:33.829202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.889 [2024-11-06 15:41:33.829355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.889 [2024-11-06 15:41:33.829361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.889 [2024-11-06 15:41:33.829366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.889 [2024-11-06 15:41:33.829372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.889 [2024-11-06 15:41:33.841065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.889 [2024-11-06 15:41:33.841515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.889 [2024-11-06 15:41:33.841530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:15.889 [2024-11-06 15:41:33.841536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:15.889 [2024-11-06 15:41:33.841687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:15.889 [2024-11-06 15:41:33.841842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.889 [2024-11-06 15:41:33.841848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.889 [2024-11-06 15:41:33.841853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.889 [2024-11-06 15:41:33.841858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.889 [2024-11-06 15:41:33.853693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.889 [2024-11-06 15:41:33.854158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.889 [2024-11-06 15:41:33.854172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:15.889 [2024-11-06 15:41:33.854178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:15.889 [2024-11-06 15:41:33.854327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:15.889 [2024-11-06 15:41:33.854477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.889 [2024-11-06 15:41:33.854483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.889 [2024-11-06 15:41:33.854488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.889 [2024-11-06 15:41:33.854492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.889 [2024-11-06 15:41:33.866299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.889 [2024-11-06 15:41:33.866834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.889 [2024-11-06 15:41:33.866867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:15.889 [2024-11-06 15:41:33.866875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:15.889 [2024-11-06 15:41:33.867043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:15.889 [2024-11-06 15:41:33.867196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.889 [2024-11-06 15:41:33.867202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.889 [2024-11-06 15:41:33.867208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.889 [2024-11-06 15:41:33.867213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3959758 Killed "${NVMF_APP[@]}" "$@"
00:29:16.151 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:16.151 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:16.151 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:16.151 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:16.151 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:16.151 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3961359
00:29:16.152 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3961359
00:29:16.152 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:16.152 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3961359 ']'
00:29:16.152 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:16.152 [2024-11-06 15:41:33.878903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:16.152 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:16.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:16.152 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:16.152 [2024-11-06 15:41:33.879472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:33.879503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:33.879511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:33.879677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 15:41:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:16.152 [2024-11-06 15:41:33.879837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:33.879845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:33.879851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:33.879857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
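[Editor's note] A word on the repeating error cycle above and below: errno = 111 is ECONNREFUSED on Linux. The old nvmf_tgt (pid 3959758) has just been killed by bdevperf.sh line 35, and the replacement target (pid 3961359) is not yet listening on 10.0.0.2:4420, so each reconnect attempt from the host's reset path is refused and ends in bdev_nvme_reset_ctrlr_complete reporting failure, until the new target finishes initializing. Below is a minimal, self-contained sketch of that same failure mode using plain POSIX sockets — an illustration only, not SPDK's actual posix_sock_create() code:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Attempt one TCP connect to the target address seen in this log.
 * While nothing is listening on the port, connect() fails with
 * ECONNREFUSED (111) -- the "connect() failed, errno = 111" above. */
int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* On Linux, errno 111 == ECONNREFUSED */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

A caller retrying this in a loop, as the SPDK reconnect poller effectively does, would see the same refusal on every pass until a listener appears; that is why the identical nine-record cycle repeats throughout this section.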
00:29:16.152 [2024-11-06 15:41:33.891537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:33.892033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:33.892049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:33.892055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:33.892205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:33.892356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:33.892363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:33.892369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:33.892375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:33.904188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:33.904695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:33.904725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:33.904734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:33.904909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:33.905063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:33.905069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:33.905074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:33.905080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:33.916808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:33.917284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:33.917302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:33.917308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:33.917459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:33.917609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:33.917614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:33.917620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:33.917624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:33.929438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:33.929832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:33.929845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:33.929850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:33.930000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:33.930150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:33.930155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:33.930160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:33.930165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:33.931229] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:29:16.152 [2024-11-06 15:41:33.931282] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:16.152 [2024-11-06 15:41:33.942140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:33.942481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:33.942494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:33.942500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:33.942650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:33.942805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:33.942811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:33.942816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:33.942821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:33.954791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:33.955342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:33.955372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:33.955381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:33.955547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:33.955700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:33.955706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:33.955712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:33.955717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:33.967402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:33.967962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:33.967978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:33.967984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:33.968135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:33.968285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:33.968291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:33.968296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:33.968301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:33.980059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:33.980546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:33.980560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:33.980565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:33.980715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:33.980870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:33.980876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:33.980881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:33.980885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:33.992694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:33.993161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:33.993176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:33.993181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:33.993338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:33.993488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:33.993494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:33.993499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:33.993503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:34.005320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:34.005786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:34.005800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:34.005806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:34.005956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:34.006106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:34.006112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:34.006116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:34.006121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:34.017933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:34.018471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:34.018501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:34.018509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:34.018675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:34.018835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:34.018842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:34.018848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:34.018853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:34.021622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:16.152 [2024-11-06 15:41:34.030533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:34.031118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:34.031149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:34.031158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:34.031324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:34.031480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:34.031487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:34.031492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:34.031498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:34.043211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:34.043705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:34.043720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:34.043726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:34.043883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:34.044033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:34.044039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:34.044044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:34.044049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:34.050917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:16.152 [2024-11-06 15:41:34.050941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:16.152 [2024-11-06 15:41:34.050947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:16.152 [2024-11-06 15:41:34.050953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:16.152 [2024-11-06 15:41:34.050958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:16.152 [2024-11-06 15:41:34.052246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:16.152 [2024-11-06 15:41:34.052403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:16.152 [2024-11-06 15:41:34.052406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:16.152 [2024-11-06 15:41:34.055920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:34.056515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:34.056546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:34.056555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:34.056722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:34.056884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:34.056891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:34.056897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:34.056903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:34.068606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:34.069117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:34.069132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:34.069138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.152 [2024-11-06 15:41:34.069289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.152 [2024-11-06 15:41:34.069440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.152 [2024-11-06 15:41:34.069445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.152 [2024-11-06 15:41:34.069450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.152 [2024-11-06 15:41:34.069456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.152 [2024-11-06 15:41:34.081285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.152 [2024-11-06 15:41:34.081776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.152 [2024-11-06 15:41:34.081790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.152 [2024-11-06 15:41:34.081796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.153 [2024-11-06 15:41:34.081947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.153 [2024-11-06 15:41:34.082098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.153 [2024-11-06 15:41:34.082103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.153 [2024-11-06 15:41:34.082109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.153 [2024-11-06 15:41:34.082115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.153 [2024-11-06 15:41:34.093943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.153 [2024-11-06 15:41:34.094510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.153 [2024-11-06 15:41:34.094544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.153 [2024-11-06 15:41:34.094552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.153 [2024-11-06 15:41:34.094723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.153 [2024-11-06 15:41:34.094883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.153 [2024-11-06 15:41:34.094890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.153 [2024-11-06 15:41:34.094896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.153 [2024-11-06 15:41:34.094902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.153 [2024-11-06 15:41:34.106572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.153 [2024-11-06 15:41:34.107031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.153 [2024-11-06 15:41:34.107061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.153 [2024-11-06 15:41:34.107071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.153 [2024-11-06 15:41:34.107243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.153 [2024-11-06 15:41:34.107396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.153 [2024-11-06 15:41:34.107402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.153 [2024-11-06 15:41:34.107408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.153 [2024-11-06 15:41:34.107414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.153 [2024-11-06 15:41:34.119238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.153 [2024-11-06 15:41:34.119481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.153 [2024-11-06 15:41:34.119501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.153 [2024-11-06 15:41:34.119507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.153 [2024-11-06 15:41:34.119663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.153 [2024-11-06 15:41:34.119819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.153 [2024-11-06 15:41:34.119825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.153 [2024-11-06 15:41:34.119830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.153 [2024-11-06 15:41:34.119835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.415 [2024-11-06 15:41:34.131938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.415 [2024-11-06 15:41:34.132499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.415 [2024-11-06 15:41:34.132528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.415 [2024-11-06 15:41:34.132537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.415 [2024-11-06 15:41:34.132703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.415 [2024-11-06 15:41:34.132862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.415 [2024-11-06 15:41:34.132868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.415 [2024-11-06 15:41:34.132875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.415 [2024-11-06 15:41:34.132880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.415 [2024-11-06 15:41:34.144549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.415 [2024-11-06 15:41:34.145183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.415 [2024-11-06 15:41:34.145213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.415 [2024-11-06 15:41:34.145221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.415 [2024-11-06 15:41:34.145388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.415 [2024-11-06 15:41:34.145541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.415 [2024-11-06 15:41:34.145551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.415 [2024-11-06 15:41:34.145556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.415 [2024-11-06 15:41:34.145563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.415 [2024-11-06 15:41:34.157236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.415 [2024-11-06 15:41:34.157723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.415 [2024-11-06 15:41:34.157738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.415 [2024-11-06 15:41:34.157744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.415 [2024-11-06 15:41:34.157898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.415 [2024-11-06 15:41:34.158048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.415 [2024-11-06 15:41:34.158054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.415 [2024-11-06 15:41:34.158059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.415 [2024-11-06 15:41:34.158064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.415 [2024-11-06 15:41:34.169873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.415 [2024-11-06 15:41:34.170431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.415 [2024-11-06 15:41:34.170462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.415 [2024-11-06 15:41:34.170470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.415 [2024-11-06 15:41:34.170636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.415 [2024-11-06 15:41:34.170796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.416 [2024-11-06 15:41:34.170803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.416 [2024-11-06 15:41:34.170809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.416 [2024-11-06 15:41:34.170816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.416 [2024-11-06 15:41:34.182493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.416 [2024-11-06 15:41:34.182976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.416 [2024-11-06 15:41:34.182991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.416 [2024-11-06 15:41:34.182997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.416 [2024-11-06 15:41:34.183147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.416 [2024-11-06 15:41:34.183297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.416 [2024-11-06 15:41:34.183303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.416 [2024-11-06 15:41:34.183307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.416 [2024-11-06 15:41:34.183316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.416 [2024-11-06 15:41:34.195123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.416 [2024-11-06 15:41:34.195710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.416 [2024-11-06 15:41:34.195740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.416 [2024-11-06 15:41:34.195755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.416 [2024-11-06 15:41:34.195923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.416 [2024-11-06 15:41:34.196076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.416 [2024-11-06 15:41:34.196082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.416 [2024-11-06 15:41:34.196089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.416 [2024-11-06 15:41:34.196094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.416 [2024-11-06 15:41:34.207759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.416 [2024-11-06 15:41:34.208339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.416 [2024-11-06 15:41:34.208369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.416 [2024-11-06 15:41:34.208378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.416 [2024-11-06 15:41:34.208544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.416 [2024-11-06 15:41:34.208697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.416 [2024-11-06 15:41:34.208703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.416 [2024-11-06 15:41:34.208708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.416 [2024-11-06 15:41:34.208714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.416 [2024-11-06 15:41:34.220382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.416 [2024-11-06 15:41:34.221039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.416 [2024-11-06 15:41:34.221070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.416 [2024-11-06 15:41:34.221078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.416 [2024-11-06 15:41:34.221244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.416 [2024-11-06 15:41:34.221396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.416 [2024-11-06 15:41:34.221402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.416 [2024-11-06 15:41:34.221407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.416 [2024-11-06 15:41:34.221413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.416 [2024-11-06 15:41:34.233098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.416 [2024-11-06 15:41:34.233589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.416 [2024-11-06 15:41:34.233607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.416 [2024-11-06 15:41:34.233612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.416 [2024-11-06 15:41:34.233766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.416 [2024-11-06 15:41:34.233917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.416 [2024-11-06 15:41:34.233922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.416 [2024-11-06 15:41:34.233927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.416 [2024-11-06 15:41:34.233932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.416 [2024-11-06 15:41:34.245732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.416 [2024-11-06 15:41:34.246176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.416 [2024-11-06 15:41:34.246189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.416 [2024-11-06 15:41:34.246194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.416 [2024-11-06 15:41:34.246344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.416 [2024-11-06 15:41:34.246493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.416 [2024-11-06 15:41:34.246499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.416 [2024-11-06 15:41:34.246504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.416 [2024-11-06 15:41:34.246508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.416 [2024-11-06 15:41:34.258450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.416 [2024-11-06 15:41:34.258915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.416 [2024-11-06 15:41:34.258927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.416 [2024-11-06 15:41:34.258933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.416 [2024-11-06 15:41:34.259082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.416 [2024-11-06 15:41:34.259232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.416 [2024-11-06 15:41:34.259237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.416 [2024-11-06 15:41:34.259242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.416 [2024-11-06 15:41:34.259247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.416 [2024-11-06 15:41:34.271100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.416 [2024-11-06 15:41:34.271485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.416 [2024-11-06 15:41:34.271498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.416 [2024-11-06 15:41:34.271503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.416 [2024-11-06 15:41:34.271656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.416 [2024-11-06 15:41:34.271810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.416 [2024-11-06 15:41:34.271816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.416 [2024-11-06 15:41:34.271821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.416 [2024-11-06 15:41:34.271826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.416 [2024-11-06 15:41:34.283766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.416 [2024-11-06 15:41:34.284318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.416 [2024-11-06 15:41:34.284348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.416 [2024-11-06 15:41:34.284357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.416 [2024-11-06 15:41:34.284523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.416 [2024-11-06 15:41:34.284676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.416 [2024-11-06 15:41:34.284682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.416 [2024-11-06 15:41:34.284687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.416 [2024-11-06 15:41:34.284693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.416 [2024-11-06 15:41:34.296368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.416 [2024-11-06 15:41:34.296885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.416 [2024-11-06 15:41:34.296916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.416 [2024-11-06 15:41:34.296924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.416 [2024-11-06 15:41:34.297092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.416 [2024-11-06 15:41:34.297245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.416 [2024-11-06 15:41:34.297251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.417 [2024-11-06 15:41:34.297257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.417 [2024-11-06 15:41:34.297262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.417 [2024-11-06 15:41:34.309079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.417 [2024-11-06 15:41:34.309567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.417 [2024-11-06 15:41:34.309582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.417 [2024-11-06 15:41:34.309587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.417 [2024-11-06 15:41:34.309738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.417 [2024-11-06 15:41:34.309892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.417 [2024-11-06 15:41:34.309902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.417 [2024-11-06 15:41:34.309908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.417 [2024-11-06 15:41:34.309912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.417 [2024-11-06 15:41:34.321720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.417 [2024-11-06 15:41:34.322323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.417 [2024-11-06 15:41:34.322353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.417 [2024-11-06 15:41:34.322362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.417 [2024-11-06 15:41:34.322528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.417 [2024-11-06 15:41:34.322681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.417 [2024-11-06 15:41:34.322687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.417 [2024-11-06 15:41:34.322692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.417 [2024-11-06 15:41:34.322698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.417 4717.33 IOPS, 18.43 MiB/s [2024-11-06T14:41:34.400Z]
00:29:16.417 [2024-11-06 15:41:34.334393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.417 [2024-11-06 15:41:34.334718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.417 [2024-11-06 15:41:34.334733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.417 [2024-11-06 15:41:34.334738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.417 [2024-11-06 15:41:34.334893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.417 [2024-11-06 15:41:34.335043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.417 [2024-11-06 15:41:34.335049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.417 [2024-11-06 15:41:34.335054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.417 [2024-11-06 15:41:34.335059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.417 [2024-11-06 15:41:34.347005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.417 [2024-11-06 15:41:34.347341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.417 [2024-11-06 15:41:34.347354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420
00:29:16.417 [2024-11-06 15:41:34.347359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set
00:29:16.417 [2024-11-06 15:41:34.347509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor
00:29:16.417 [2024-11-06 15:41:34.347659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.417 [2024-11-06 15:41:34.347665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.417 [2024-11-06 15:41:34.347670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.417 [2024-11-06 15:41:34.347679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
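[Editor's note] The bdevperf statistics line at the top of this block ties out arithmetically if the job is issuing 4 KiB (4096-byte) I/Os — the block size is not shown in this excerpt, so treat that as an assumption:

4717.33 IOPS x 4096 B = 19,322,184 B/s; 19,322,184 / 1,048,576 = 18.43 MiB/s

which matches the reported 18.43 MiB/s.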
00:29:16.417 [2024-11-06 15:41:34.359624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.417 [2024-11-06 15:41:34.359865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-11-06 15:41:34.359877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.417 [2024-11-06 15:41:34.359882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.417 [2024-11-06 15:41:34.360032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.417 [2024-11-06 15:41:34.360181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.417 [2024-11-06 15:41:34.360187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.417 [2024-11-06 15:41:34.360192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.417 [2024-11-06 15:41:34.360196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.417 [2024-11-06 15:41:34.372330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.417 [2024-11-06 15:41:34.372948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-11-06 15:41:34.372978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.417 [2024-11-06 15:41:34.372987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.417 [2024-11-06 15:41:34.373153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.417 [2024-11-06 15:41:34.373305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.417 [2024-11-06 15:41:34.373311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.417 [2024-11-06 15:41:34.373316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.417 [2024-11-06 15:41:34.373322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.417 [2024-11-06 15:41:34.384998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.417 [2024-11-06 15:41:34.385574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.417 [2024-11-06 15:41:34.385604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.417 [2024-11-06 15:41:34.385612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.417 [2024-11-06 15:41:34.385784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.417 [2024-11-06 15:41:34.385938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.417 [2024-11-06 15:41:34.385944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.417 [2024-11-06 15:41:34.385949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.417 [2024-11-06 15:41:34.385954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.681 [2024-11-06 15:41:34.397630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.681 [2024-11-06 15:41:34.398192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.681 [2024-11-06 15:41:34.398226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.681 [2024-11-06 15:41:34.398235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.681 [2024-11-06 15:41:34.398400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.681 [2024-11-06 15:41:34.398553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.681 [2024-11-06 15:41:34.398559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.681 [2024-11-06 15:41:34.398564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.681 [2024-11-06 15:41:34.398569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.681 [2024-11-06 15:41:34.410258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.681 [2024-11-06 15:41:34.410758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.681 [2024-11-06 15:41:34.410774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.681 [2024-11-06 15:41:34.410780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.681 [2024-11-06 15:41:34.410930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.681 [2024-11-06 15:41:34.411080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.681 [2024-11-06 15:41:34.411086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.681 [2024-11-06 15:41:34.411091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.681 [2024-11-06 15:41:34.411096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.681 [2024-11-06 15:41:34.422902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.681 [2024-11-06 15:41:34.423396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.681 [2024-11-06 15:41:34.423408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.681 [2024-11-06 15:41:34.423414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.681 [2024-11-06 15:41:34.423563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.681 [2024-11-06 15:41:34.423713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.681 [2024-11-06 15:41:34.423718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.681 [2024-11-06 15:41:34.423723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.681 [2024-11-06 15:41:34.423728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.681 [2024-11-06 15:41:34.435548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.681 [2024-11-06 15:41:34.436121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.681 [2024-11-06 15:41:34.436151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.681 [2024-11-06 15:41:34.436160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.681 [2024-11-06 15:41:34.436329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.681 [2024-11-06 15:41:34.436482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.681 [2024-11-06 15:41:34.436488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.681 [2024-11-06 15:41:34.436493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.681 [2024-11-06 15:41:34.436498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.681 [2024-11-06 15:41:34.448184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.681 [2024-11-06 15:41:34.448662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.681 [2024-11-06 15:41:34.448692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.681 [2024-11-06 15:41:34.448701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.681 [2024-11-06 15:41:34.448874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.681 [2024-11-06 15:41:34.449027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.681 [2024-11-06 15:41:34.449033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.681 [2024-11-06 15:41:34.449038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.681 [2024-11-06 15:41:34.449044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.681 [2024-11-06 15:41:34.460857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.681 [2024-11-06 15:41:34.461447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.681 [2024-11-06 15:41:34.461477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.681 [2024-11-06 15:41:34.461486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.681 [2024-11-06 15:41:34.461652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.681 [2024-11-06 15:41:34.461811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.681 [2024-11-06 15:41:34.461818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.681 [2024-11-06 15:41:34.461823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.681 [2024-11-06 15:41:34.461829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.681 [2024-11-06 15:41:34.473532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.681 [2024-11-06 15:41:34.474089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.681 [2024-11-06 15:41:34.474119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.681 [2024-11-06 15:41:34.474128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.681 [2024-11-06 15:41:34.474294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.681 [2024-11-06 15:41:34.474446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.682 [2024-11-06 15:41:34.474456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.682 [2024-11-06 15:41:34.474462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.682 [2024-11-06 15:41:34.474467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.682 [2024-11-06 15:41:34.486144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.682 [2024-11-06 15:41:34.486744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.682 [2024-11-06 15:41:34.486780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.682 [2024-11-06 15:41:34.486789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.682 [2024-11-06 15:41:34.486957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.682 [2024-11-06 15:41:34.487110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.682 [2024-11-06 15:41:34.487116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.682 [2024-11-06 15:41:34.487121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.682 [2024-11-06 15:41:34.487126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.682 [2024-11-06 15:41:34.498797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.682 [2024-11-06 15:41:34.499300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.682 [2024-11-06 15:41:34.499314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.682 [2024-11-06 15:41:34.499320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.682 [2024-11-06 15:41:34.499470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.682 [2024-11-06 15:41:34.499620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.682 [2024-11-06 15:41:34.499625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.682 [2024-11-06 15:41:34.499630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.682 [2024-11-06 15:41:34.499635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.682 [2024-11-06 15:41:34.511441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.682 [2024-11-06 15:41:34.511892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.682 [2024-11-06 15:41:34.511905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.682 [2024-11-06 15:41:34.511910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.682 [2024-11-06 15:41:34.512060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.682 [2024-11-06 15:41:34.512210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.682 [2024-11-06 15:41:34.512215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.682 [2024-11-06 15:41:34.512220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.682 [2024-11-06 15:41:34.512225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.682 [2024-11-06 15:41:34.524034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.682 [2024-11-06 15:41:34.524568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.682 [2024-11-06 15:41:34.524598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.682 [2024-11-06 15:41:34.524606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.682 [2024-11-06 15:41:34.524779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.682 [2024-11-06 15:41:34.524932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.682 [2024-11-06 15:41:34.524938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.682 [2024-11-06 15:41:34.524943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.682 [2024-11-06 15:41:34.524949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.682 [2024-11-06 15:41:34.536626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.682 [2024-11-06 15:41:34.537210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.682 [2024-11-06 15:41:34.537240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.682 [2024-11-06 15:41:34.537249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.682 [2024-11-06 15:41:34.537415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.682 [2024-11-06 15:41:34.537567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.682 [2024-11-06 15:41:34.537573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.682 [2024-11-06 15:41:34.537579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.682 [2024-11-06 15:41:34.537584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.682 [2024-11-06 15:41:34.549260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.682 [2024-11-06 15:41:34.549791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.682 [2024-11-06 15:41:34.549813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.682 [2024-11-06 15:41:34.549819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.682 [2024-11-06 15:41:34.549974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.682 [2024-11-06 15:41:34.550125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.682 [2024-11-06 15:41:34.550131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.682 [2024-11-06 15:41:34.550136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.682 [2024-11-06 15:41:34.550141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
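The attempts above recur roughly every 12-13 ms (e.g. 15:41:34.536626 to 15:41:34.549260), which suggests immediate retries with no reconnect delay configured. If this pattern needed taming, the retry pacing and give-up point can be set when attaching the controller; a hedged sketch with SPDK's rpc.py, not taken from this run, with flag names as in recent SPDK releases:

```bash
# Hypothetical tuning example -- NOT the command this test used; flag names
# may vary by SPDK version (see `rpc.py bdev_nvme_attach_controller -h`).
# --reconnect-delay-sec      seconds to wait between reconnect attempts
# --ctrlr-loss-timeout-sec   give up on the controller after this many seconds
# --fast-io-fail-timeout-sec fail pending I/O after this long disconnected
./scripts/rpc.py bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --reconnect-delay-sec 1 --ctrlr-loss-timeout-sec 30 --fast-io-fail-timeout-sec 5
```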
00:29:16.682 [2024-11-06 15:41:34.561955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.682 [2024-11-06 15:41:34.562503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.682 [2024-11-06 15:41:34.562537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.682 [2024-11-06 15:41:34.562545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.682 [2024-11-06 15:41:34.562711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.682 [2024-11-06 15:41:34.562871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.682 [2024-11-06 15:41:34.562878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.682 [2024-11-06 15:41:34.562884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.682 [2024-11-06 15:41:34.562889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.682 [2024-11-06 15:41:34.574564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.682 [2024-11-06 15:41:34.575134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.682 [2024-11-06 15:41:34.575164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.682 [2024-11-06 15:41:34.575173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.682 [2024-11-06 15:41:34.575338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.682 [2024-11-06 15:41:34.575491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.682 [2024-11-06 15:41:34.575497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.682 [2024-11-06 15:41:34.575502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.682 [2024-11-06 15:41:34.575508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.682 [2024-11-06 15:41:34.587189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.682 [2024-11-06 15:41:34.587655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.682 [2024-11-06 15:41:34.587670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.682 [2024-11-06 15:41:34.587676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.682 [2024-11-06 15:41:34.587831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.682 [2024-11-06 15:41:34.587981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.683 [2024-11-06 15:41:34.587986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.683 [2024-11-06 15:41:34.587991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.683 [2024-11-06 15:41:34.587996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.683 [2024-11-06 15:41:34.599809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.683 [2024-11-06 15:41:34.600387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.683 [2024-11-06 15:41:34.600417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.683 [2024-11-06 15:41:34.600425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.683 [2024-11-06 15:41:34.600598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.683 [2024-11-06 15:41:34.600757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.683 [2024-11-06 15:41:34.600764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.683 [2024-11-06 15:41:34.600769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.683 [2024-11-06 15:41:34.600775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.683 [2024-11-06 15:41:34.612443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.683 [2024-11-06 15:41:34.613006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.683 [2024-11-06 15:41:34.613037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.683 [2024-11-06 15:41:34.613045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.683 [2024-11-06 15:41:34.613211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.683 [2024-11-06 15:41:34.613364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.683 [2024-11-06 15:41:34.613370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.683 [2024-11-06 15:41:34.613375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.683 [2024-11-06 15:41:34.613380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.683 [2024-11-06 15:41:34.625072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.683 [2024-11-06 15:41:34.625565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.683 [2024-11-06 15:41:34.625580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.683 [2024-11-06 15:41:34.625585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.683 [2024-11-06 15:41:34.625736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.683 [2024-11-06 15:41:34.625891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.683 [2024-11-06 15:41:34.625897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.683 [2024-11-06 15:41:34.625902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.683 [2024-11-06 15:41:34.625907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.683 [2024-11-06 15:41:34.637750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.683 [2024-11-06 15:41:34.638153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.683 [2024-11-06 15:41:34.638184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.683 [2024-11-06 15:41:34.638192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.683 [2024-11-06 15:41:34.638360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.683 [2024-11-06 15:41:34.638512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.683 [2024-11-06 15:41:34.638519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.683 [2024-11-06 15:41:34.638528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.683 [2024-11-06 15:41:34.638534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.683 [2024-11-06 15:41:34.650360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.683 [2024-11-06 15:41:34.650709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.683 [2024-11-06 15:41:34.650724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.683 [2024-11-06 15:41:34.650730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.683 [2024-11-06 15:41:34.650884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.683 [2024-11-06 15:41:34.651034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.683 [2024-11-06 15:41:34.651040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.683 [2024-11-06 15:41:34.651045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.683 [2024-11-06 15:41:34.651050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.945 [2024-11-06 15:41:34.663002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.946 [2024-11-06 15:41:34.663488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.946 [2024-11-06 15:41:34.663502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.946 [2024-11-06 15:41:34.663507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.946 [2024-11-06 15:41:34.663657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.946 [2024-11-06 15:41:34.663811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.946 [2024-11-06 15:41:34.663818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.946 [2024-11-06 15:41:34.663824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.946 [2024-11-06 15:41:34.663828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.946 [2024-11-06 15:41:34.675633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.946 [2024-11-06 15:41:34.676070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.946 [2024-11-06 15:41:34.676083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.946 [2024-11-06 15:41:34.676088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.946 [2024-11-06 15:41:34.676238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.946 [2024-11-06 15:41:34.676387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.946 [2024-11-06 15:41:34.676393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.946 [2024-11-06 15:41:34.676398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.946 [2024-11-06 15:41:34.676405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.946 [2024-11-06 15:41:34.688240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.946 [2024-11-06 15:41:34.688883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.946 [2024-11-06 15:41:34.688914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.946 [2024-11-06 15:41:34.688923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.946 [2024-11-06 15:41:34.689088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.946 [2024-11-06 15:41:34.689241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.946 [2024-11-06 15:41:34.689248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.946 [2024-11-06 15:41:34.689253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.946 [2024-11-06 15:41:34.689258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.946 [2024-11-06 15:41:34.700935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.946 [2024-11-06 15:41:34.701518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.946 [2024-11-06 15:41:34.701548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.946 [2024-11-06 15:41:34.701556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.946 [2024-11-06 15:41:34.701724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.946 [2024-11-06 15:41:34.701884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.946 [2024-11-06 15:41:34.701891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.946 [2024-11-06 15:41:34.701896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.946 [2024-11-06 15:41:34.701902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.946 [2024-11-06 15:41:34.713567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.946 [2024-11-06 15:41:34.714134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.946 [2024-11-06 15:41:34.714164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.946 [2024-11-06 15:41:34.714172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.946 [2024-11-06 15:41:34.714338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.946 [2024-11-06 15:41:34.714491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.946 [2024-11-06 15:41:34.714496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.946 [2024-11-06 15:41:34.714502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.946 [2024-11-06 15:41:34.714508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.946 [2024-11-06 15:41:34.726172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.946 [2024-11-06 15:41:34.726765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.946 [2024-11-06 15:41:34.726798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.946 [2024-11-06 15:41:34.726807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.946 [2024-11-06 15:41:34.726975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.946 [2024-11-06 15:41:34.727128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.946 [2024-11-06 15:41:34.727134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.946 [2024-11-06 15:41:34.727139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.946 [2024-11-06 15:41:34.727145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
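Every refusal so far has the same proximate cause: nothing is listening on 10.0.0.2:4420 yet; the target's listener only appears at 15:41:34.838606 (see the nvmf_tcp_listen notice further down). A quick way to tell "no listener yet" apart from a genuine network fault, using the address and port from the log:

```bash
# Probe the endpoint the initiator keeps retrying (address/port from the log).
# This succeeds only once nvmf_subsystem_add_listener has run on the target.
nc -zv 10.0.0.2 4420 && echo "listener is up" || echo "refused: target not listening yet"
```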
00:29:16.946 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:16.946 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:16.946 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:16.946 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:16.946 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:16.946 [2024-11-06 15:41:34.738849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.946 [2024-11-06 15:41:34.739342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.946 [2024-11-06 15:41:34.739357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.946 [2024-11-06 15:41:34.739363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.946 [2024-11-06 15:41:34.739513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.946 [2024-11-06 15:41:34.739664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.946 [2024-11-06 15:41:34.739669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.946 [2024-11-06 15:41:34.739674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.946 [2024-11-06 15:41:34.739680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.946 [2024-11-06 15:41:34.751582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.946 [2024-11-06 15:41:34.751935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.946 [2024-11-06 15:41:34.751949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.946 [2024-11-06 15:41:34.751954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.946 [2024-11-06 15:41:34.752104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.946 [2024-11-06 15:41:34.752254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.946 [2024-11-06 15:41:34.752260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.946 [2024-11-06 15:41:34.752265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.946 [2024-11-06 15:41:34.752270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.946 [2024-11-06 15:41:34.764219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.946 [2024-11-06 15:41:34.764714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.946 [2024-11-06 15:41:34.764726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.946 [2024-11-06 15:41:34.764732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.947 [2024-11-06 15:41:34.764885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.947 [2024-11-06 15:41:34.765035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.947 [2024-11-06 15:41:34.765042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.947 [2024-11-06 15:41:34.765049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.947 [2024-11-06 15:41:34.765054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:16.947 [2024-11-06 15:41:34.773802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:16.947 [2024-11-06 15:41:34.776864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.947 [2024-11-06 15:41:34.777310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.947 [2024-11-06 15:41:34.777340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.947 [2024-11-06 15:41:34.777349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.947 [2024-11-06 15:41:34.777514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.947 [2024-11-06 15:41:34.777667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.947 [2024-11-06 15:41:34.777673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.947 [2024-11-06 15:41:34.777678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.947 [2024-11-06 15:41:34.777684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
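Interleaved with the retry noise, the target is being assembled over JSON-RPC: the TCP transport was just created (rpc_cmd nvmf_create_transport -t tcp -o -u 8192, acknowledged by the "TCP Transport Init" notice above), and the malloc bdev, subsystem, namespace, and listener follow below. Pulled out of the trace, the bring-up reduces to the sequence sketched here; the rpc.py path and default RPC socket are assumptions, while the arguments are verbatim from the log:

```bash
# Target bring-up as it appears in the rpc_cmd trace; the script location is
# an assumption, all arguments are copied from the surrounding log records.
RPC="./scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```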
00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:16.947 [2024-11-06 15:41:34.789500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.947 [2024-11-06 15:41:34.790038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.947 [2024-11-06 15:41:34.790053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.947 [2024-11-06 15:41:34.790058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.947 [2024-11-06 15:41:34.790209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.947 [2024-11-06 15:41:34.790359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.947 [2024-11-06 15:41:34.790368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.947 [2024-11-06 15:41:34.790373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.947 [2024-11-06 15:41:34.790379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.947 [2024-11-06 15:41:34.802183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.947 [2024-11-06 15:41:34.802652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.947 [2024-11-06 15:41:34.802681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.947 [2024-11-06 15:41:34.802690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.947 [2024-11-06 15:41:34.802865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.947 [2024-11-06 15:41:34.803019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.947 [2024-11-06 15:41:34.803025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.947 [2024-11-06 15:41:34.803030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.947 [2024-11-06 15:41:34.803036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
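The bdev_malloc_create 64 512 -b Malloc0 call in the trace above sizes the RAM-backed bdev at 64 MiB with 512-byte blocks, i.e. 131072 LBAs backing the namespace the verify job will exercise; a one-line sanity check:

```bash
# 64 MiB total / 512 B per block = 131072 LBAs in Malloc0
echo $(( 64 * 1024 * 1024 / 512 ))   # prints 131072
```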
00:29:16.947 Malloc0 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:16.947 [2024-11-06 15:41:34.814848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.947 [2024-11-06 15:41:34.815430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.947 [2024-11-06 15:41:34.815460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.947 [2024-11-06 15:41:34.815469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.947 [2024-11-06 15:41:34.815636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.947 [2024-11-06 15:41:34.815795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.947 [2024-11-06 15:41:34.815802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.947 [2024-11-06 15:41:34.815808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.947 [2024-11-06 15:41:34.815813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:16.947 [2024-11-06 15:41:34.827480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.947 [2024-11-06 15:41:34.827874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.947 [2024-11-06 15:41:34.827907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba2280 with addr=10.0.0.2, port=4420 00:29:16.947 [2024-11-06 15:41:34.827916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba2280 is same with the state(6) to be set 00:29:16.947 [2024-11-06 15:41:34.828084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba2280 (9): Bad file descriptor 00:29:16.947 [2024-11-06 15:41:34.828237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.947 [2024-11-06 15:41:34.828243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.947 [2024-11-06 15:41:34.828248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:16.947 [2024-11-06 15:41:34.828254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:16.947 [2024-11-06 15:41:34.838606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:16.947 [2024-11-06 15:41:34.840084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:16.947 15:41:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3960135
[2024-11-06 15:41:34.907416] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:29:18.463 4833.29 IOPS, 18.88 MiB/s
[2024-11-06T14:41:37.389Z] 5880.75 IOPS, 22.97 MiB/s
[2024-11-06T14:41:38.772Z] 6694.11 IOPS, 26.15 MiB/s
[2024-11-06T14:41:39.345Z] 7375.20 IOPS, 28.81 MiB/s
[2024-11-06T14:41:40.730Z] 7891.18 IOPS, 30.82 MiB/s
[2024-11-06T14:41:41.673Z] 8334.83 IOPS, 32.56 MiB/s
[2024-11-06T14:41:42.615Z] 8728.77 IOPS, 34.10 MiB/s
[2024-11-06T14:41:43.555Z] 9053.36 IOPS, 35.36 MiB/s
00:29:25.572 Latency(us)
00:29:25.572 [2024-11-06T14:41:43.555Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s    TO/s  Average  min     max
00:29:25.572 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:25.572 Verification LBA range: start 0x0 length 0x4000
00:29:25.572 Nvme1n1            :      15.00  9334.00  36.46  13488.75  0.00  5590.12  411.31  15728.64
00:29:25.572 [2024-11-06T14:41:43.555Z] ===================================================================================================================
00:29:25.572 [2024-11-06T14:41:43.555Z] Total              :             9334.00  36.46  13488.75  0.00  5590.12  411.31  15728.64
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
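One consistency check on the results table above: bdevperf derives the MiB/s column from IOPS at the job's 4096-byte IO size, so the 15.00 s average of 9334.00 IOPS should, and does, reproduce the reported 36.46 MiB/s:

```bash
# 9334.00 IOPS * 4096 B per IO / 1048576 B per MiB = 36.46 MiB/s
echo "scale=2; 9334.00 * 4096 / 1048576" | bc   # prints 36.46
```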
00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.572 rmmod nvme_tcp 00:29:25.572 rmmod nvme_fabrics 00:29:25.572 rmmod nvme_keyring 00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3961359 ']' 00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3961359 00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 3961359 ']' 00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 3961359 00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:25.572 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3961359 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3961359' 00:29:25.832 killing process with pid 3961359 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 3961359 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 3961359 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.832 15:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.508 15:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.508 00:29:28.508 real 0m28.378s 00:29:28.508 user 1m3.362s 00:29:28.508 sys 0m7.755s 00:29:28.508 15:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:28.508 15:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # 
set +x 00:29:28.508 ************************************ 00:29:28.508 END TEST nvmf_bdevperf 00:29:28.508 ************************************ 00:29:28.508 15:41:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:28.508 15:41:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:28.508 15:41:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:28.508 15:41:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.508 ************************************ 00:29:28.508 START TEST nvmf_target_disconnect 00:29:28.508 ************************************ 00:29:28.508 15:41:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:28.508 * Looking for test storage... 00:29:28.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.508 15:41:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:28.508 15:41:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:28.508 15:41:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:28.508 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:28.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.509 --rc genhtml_branch_coverage=1 00:29:28.509 --rc genhtml_function_coverage=1 00:29:28.509 --rc genhtml_legend=1 00:29:28.509 --rc geninfo_all_blocks=1 00:29:28.509 --rc geninfo_unexecuted_blocks=1 00:29:28.509 00:29:28.509 ' 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:28.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.509 --rc genhtml_branch_coverage=1 00:29:28.509 --rc genhtml_function_coverage=1 00:29:28.509 --rc genhtml_legend=1 00:29:28.509 --rc geninfo_all_blocks=1 00:29:28.509 --rc geninfo_unexecuted_blocks=1 00:29:28.509 00:29:28.509 ' 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:28.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.509 --rc genhtml_branch_coverage=1 00:29:28.509 --rc genhtml_function_coverage=1 00:29:28.509 --rc genhtml_legend=1 00:29:28.509 --rc geninfo_all_blocks=1 00:29:28.509 --rc geninfo_unexecuted_blocks=1 00:29:28.509 00:29:28.509 ' 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:28.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.509 --rc genhtml_branch_coverage=1 00:29:28.509 --rc genhtml_function_coverage=1 00:29:28.509 --rc genhtml_legend=1 00:29:28.509 --rc geninfo_all_blocks=1 00:29:28.509 --rc geninfo_unexecuted_blocks=1 00:29:28.509 00:29:28.509 ' 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.509 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.510 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.510 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.510 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.510 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.510 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:28.510 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.510 15:41:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:36.660 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:36.660 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:36.660 Found net devices under 0000:31:00.0: cvl_0_0 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.660 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:36.660 Found net devices under 0000:31:00.1: cvl_0_1 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
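The nvmf_tcp_init sequence traced below splits the two E810 ports between the endpoints: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Condensed, with names and addresses exactly as in this run, it amounts to:

  # Condensed rendering of the nvmf_tcp_init trace that follows (sketch, not the harness)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # sanity: initiator reaches target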
00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:29:36.661 00:29:36.661 --- 10.0.0.2 ping statistics --- 00:29:36.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.661 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:36.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:29:36.661 00:29:36.661 --- 10.0.0.1 ping statistics --- 00:29:36.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.661 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:36.661 ************************************ 00:29:36.661 START TEST nvmf_target_disconnect_tc1 00:29:36.661 ************************************ 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:36.661 15:41:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.661 [2024-11-06 15:41:53.926728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.661 [2024-11-06 15:41:53.926804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d25f60 with addr=10.0.0.2, port=4420 00:29:36.661 [2024-11-06 15:41:53.926831] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:36.661 [2024-11-06 15:41:53.926843] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:36.661 [2024-11-06 15:41:53.926856] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:36.661 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:36.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:36.661 Initializing NVMe Controllers 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:36.661 00:29:36.661 real 0m0.145s 00:29:36.661 user 0m0.064s 00:29:36.661 sys 0m0.080s 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.661 ************************************ 00:29:36.661 END TEST nvmf_target_disconnect_tc1 00:29:36.661 ************************************ 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:29:36.661 15:41:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:36.661 ************************************ 00:29:36.661 START TEST nvmf_target_disconnect_tc2 00:29:36.661 ************************************ 00:29:36.661 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:29:36.661 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:36.661 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:36.661 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:36.661 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:36.661 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.661 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3967548 00:29:36.661 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3967548 00:29:36.661 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:36.661 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3967548 ']' 00:29:36.661 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.662 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:36.662 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.662 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:36.662 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.662 [2024-11-06 15:41:54.098237] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:29:36.662 [2024-11-06 15:41:54.098327] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.662 [2024-11-06 15:41:54.199597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:36.662 [2024-11-06 15:41:54.253207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.662 [2024-11-06 15:41:54.253260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
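The tc2 choreography traced from here on: nvmf_tgt runs inside the target namespace pinned to cores 4-7 (-m 0xF0), the subsystem is provisioned over RPC, the reconnect example drives randrw I/O from cores 0-3 (-c 0xF), and the target is then hard-killed mid-run so every in-flight qpair fails. A rough standalone rendering (paths relative to the SPDK tree; PID handling illustrative):

  # Sketch of the target_disconnect.sh tc2 flow; provisioning elided (see the rpc_cmd trace below)
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # ... bdev_malloc_create / nvmf_create_transport / subsystem setup via rpc.py ...
  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  sleep 2
  kill -9 "$nvmfpid"    # force-disconnect: the I/O failures below are the expected result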
00:29:36.662 [2024-11-06 15:41:54.253268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.662 [2024-11-06 15:41:54.253276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.662 [2024-11-06 15:41:54.253282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.662 [2024-11-06 15:41:54.255209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:36.662 [2024-11-06 15:41:54.255370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:36.662 [2024-11-06 15:41:54.255529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:36.662 [2024-11-06 15:41:54.255530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:37.235 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:37.235 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:37.235 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:37.235 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:37.235 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.235 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.235 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:37.235 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.235 15:41:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.235 Malloc0 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.235 [2024-11-06 15:41:55.012435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.235 15:41:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.235 [2024-11-06 15:41:55.052880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3967606 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:37.235 15:41:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:39.171 15:41:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3967548 00:29:39.171 15:41:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error 
(sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Write completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Write completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Write completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Write completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Write completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Write completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.171 Read completed with error (sct=0, sc=8) 00:29:39.171 starting I/O failed 00:29:39.172 Read completed with error (sct=0, sc=8) 00:29:39.172 starting I/O failed 00:29:39.172 Write completed with error (sct=0, sc=8) 00:29:39.172 starting I/O failed 00:29:39.172 Write completed with error (sct=0, sc=8) 00:29:39.172 starting I/O failed 00:29:39.172 [2024-11-06 15:41:57.091811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.172 [2024-11-06 15:41:57.092314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.172 [2024-11-06 15:41:57.092337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.172 qpair failed and we were unable to recover it. 00:29:39.172 [2024-11-06 15:41:57.092680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.172 [2024-11-06 15:41:57.092692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.172 qpair failed and we were unable to recover it. 00:29:39.172 [2024-11-06 15:41:57.093075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.172 [2024-11-06 15:41:57.093138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.172 qpair failed and we were unable to recover it. 
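Every retry above dies in posix_sock_create with errno = 111 because nothing is listening on 10.0.0.2:4420 once the target is killed; on Linux that errno is ECONNREFUSED. A quick confirmation (illustrative one-liner, python3 assumed available):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused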
00:29:39.172 [2024-11-06 15:41:57.093493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.172 [2024-11-06 15:41:57.093507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:39.172 qpair failed and we were unable to recover it.
[... the same three-line error repeats for roughly 200 further connection attempts (wall-clock 15:41:57.094 through 15:41:57.165, elapsed 00:29:39.172 to 00:29:39.458), every attempt against tqpair=0x131f010, addr=10.0.0.2, port=4420 with errno = 111 ...]
00:29:39.459 [2024-11-06 15:41:57.165669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.165701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.166125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.166155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.166559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.166594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.166948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.166980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.167353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.167381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.167766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.167796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.168158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.168187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.168565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.168595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.168971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.169001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.169362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.169391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 
00:29:39.460 [2024-11-06 15:41:57.169769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.169799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.170162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.170191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.170555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.170583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.170973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.171005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.171348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.171378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.171761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.171791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.172205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.172235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.172581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.172610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.172884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.172914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.173288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.173317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 
00:29:39.460 [2024-11-06 15:41:57.173674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.173701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.173969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.173999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.174342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.174371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.174624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.174653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.175003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.175034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.175394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.175423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.175780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.175810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.176172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.176201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.176451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.176483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.176883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.176920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 
00:29:39.460 [2024-11-06 15:41:57.177276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.177307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.177641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.177669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.178050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.178081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.178440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.178467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.178819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.178851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.179232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.179262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.179635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.179663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.180010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.180041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.460 [2024-11-06 15:41:57.180403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.460 [2024-11-06 15:41:57.180432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.460 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.180794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.180823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 
00:29:39.461 [2024-11-06 15:41:57.181226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.181254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.181612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.181642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.182005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.182035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.182361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.182390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.182744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.182793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.183161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.183190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.183440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.183468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.183819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.183850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.184130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.184159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.184537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.184566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 
00:29:39.461 [2024-11-06 15:41:57.184928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.184959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.185308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.185336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.185698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.185726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.186113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.186142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.186501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.186530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.186880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.186910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.187159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.187195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.187542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.187571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.188017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.188048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.188413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.188443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 
00:29:39.461 [2024-11-06 15:41:57.188809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.188841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.189216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.189245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.189600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.189629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.189981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.190013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.190260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.190291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.190652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.190683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.191046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.191075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.191434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.191462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.191815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.191846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.192249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.192278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 
00:29:39.461 [2024-11-06 15:41:57.192631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.192663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.193008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.193040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.193399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.193429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.193811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.193841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.194194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.194231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.194595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.194624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.194969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.194999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.461 [2024-11-06 15:41:57.195261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.461 [2024-11-06 15:41:57.195289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.461 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.195667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.195696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.196077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.196108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 
00:29:39.462 [2024-11-06 15:41:57.196452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.196482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.196848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.196879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.197225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.197254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.197615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.197645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.198080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.198111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.198448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.198478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.198816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.198847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.199183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.199213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.199585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.199615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.200008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.200038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 
00:29:39.462 [2024-11-06 15:41:57.200381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.200409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.200775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.200805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.201149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.201178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.201585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.201616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.201967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.201998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.202357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.202387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.202806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.202836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.203197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.203226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.203591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.203619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.203972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.204005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 
00:29:39.462 [2024-11-06 15:41:57.204379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.204409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.204767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.204798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.205150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.205180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.205542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.205570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.205940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.205971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.206339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.206369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.206739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.206851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.207192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.207221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.207556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.207584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.207842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.207873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 
00:29:39.462 [2024-11-06 15:41:57.208226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.208255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.208566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.208597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.208958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.208988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.209347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.209375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.209638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.209665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.210067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.462 [2024-11-06 15:41:57.210098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.462 qpair failed and we were unable to recover it. 00:29:39.462 [2024-11-06 15:41:57.210442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.210471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.210825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.210856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.211211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.211242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.211610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.211638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 
00:29:39.463 [2024-11-06 15:41:57.212020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.212052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.212416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.212444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.212791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.212821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.213217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.213247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.213595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.213632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.213878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.213912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.214249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.214279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.214635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.214664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.214915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.214944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.215290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.215318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 
00:29:39.463 [2024-11-06 15:41:57.215687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.215717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.216100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.216132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.216530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.216559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.216920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.216950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.217312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.217340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.217770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.217803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.218155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.218184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.218553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.218582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.218951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.218982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 00:29:39.463 [2024-11-06 15:41:57.219342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.463 [2024-11-06 15:41:57.219370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.463 qpair failed and we were unable to recover it. 
00:29:39.468 [2024-11-06 15:41:57.295427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.468 [2024-11-06 15:41:57.295457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.468 qpair failed and we were unable to recover it. 00:29:39.468 [2024-11-06 15:41:57.295817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.468 [2024-11-06 15:41:57.295848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.468 qpair failed and we were unable to recover it. 00:29:39.468 [2024-11-06 15:41:57.296214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.468 [2024-11-06 15:41:57.296243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.468 qpair failed and we were unable to recover it. 00:29:39.468 [2024-11-06 15:41:57.296601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.468 [2024-11-06 15:41:57.296630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.468 qpair failed and we were unable to recover it. 00:29:39.468 [2024-11-06 15:41:57.296917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.468 [2024-11-06 15:41:57.296947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.468 qpair failed and we were unable to recover it. 00:29:39.468 [2024-11-06 15:41:57.297304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.468 [2024-11-06 15:41:57.297333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.468 qpair failed and we were unable to recover it. 00:29:39.468 [2024-11-06 15:41:57.297598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.468 [2024-11-06 15:41:57.297628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.468 qpair failed and we were unable to recover it. 00:29:39.468 [2024-11-06 15:41:57.297855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.468 [2024-11-06 15:41:57.297889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.468 qpair failed and we were unable to recover it. 00:29:39.468 [2024-11-06 15:41:57.298248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.468 [2024-11-06 15:41:57.298277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.468 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.298622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.298651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 
00:29:39.469 [2024-11-06 15:41:57.299032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.299062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.299422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.299456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.299806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.299837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.300281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.300312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.300673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.300702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.302530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.302593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.303029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.303065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.303336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.303366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.303720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.303765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.304126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.304155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 
00:29:39.469 [2024-11-06 15:41:57.304505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.304535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.304894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.304927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.305270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.305299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.305650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.305678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.306058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.306088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.306447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.306476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.306830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.306860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.307780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.307827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.308224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.308258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.308716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.308770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 
00:29:39.469 [2024-11-06 15:41:57.309181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.309210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.309578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.309608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.310035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.310068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.310307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.310341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.310692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.310724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.311093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.311123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.311490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.311519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.311895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.311925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.312293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.312332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.312670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.312700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 
00:29:39.469 [2024-11-06 15:41:57.313086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.313117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.313472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.313501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.313873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.313904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.314358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.314387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.314633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.314662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.314989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.315020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.315385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.469 [2024-11-06 15:41:57.315414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.469 qpair failed and we were unable to recover it. 00:29:39.469 [2024-11-06 15:41:57.315772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.315802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.316168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.316196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.316558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.316589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 
00:29:39.470 [2024-11-06 15:41:57.316830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.316863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.317263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.317292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.317645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.317676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.318076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.318107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.318469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.318498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.318873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.318903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.319289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.319319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.319562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.319591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.319937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.319967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.320277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.320306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 
00:29:39.470 [2024-11-06 15:41:57.320672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.320700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.321120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.321150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.321492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.321531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.321853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.321885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.322225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.322256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.322627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.322656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.323036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.323067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.323422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.323451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.323826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.323858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.324230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.324259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 
00:29:39.470 [2024-11-06 15:41:57.324623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.324651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.325018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.325048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.325426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.325455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.325804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.325837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.326207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.326235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.326573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.326604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.326981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.327012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.327350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.327380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.327773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.327803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.328151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.328182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 
00:29:39.470 [2024-11-06 15:41:57.328553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.328583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.328960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.328992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.329353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.329381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.329739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.329788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.330150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.330180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.330540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.470 [2024-11-06 15:41:57.330569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.470 qpair failed and we were unable to recover it. 00:29:39.470 [2024-11-06 15:41:57.330926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.330959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.332923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.332987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.333348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.333383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.333766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.333799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 
00:29:39.471 [2024-11-06 15:41:57.334162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.334191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.334549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.334577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.334933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.334964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.335219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.335248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.335605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.335635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.335984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.336014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.336383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.336413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.336775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.336805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.337140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.337169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.337534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.337564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 
00:29:39.471 [2024-11-06 15:41:57.337924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.337953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.338296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.338326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.338697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.338726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.339081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.339111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.339364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.339392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.339820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.339851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.340199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.340234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.340604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.340635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.341032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.341063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.341424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.341453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 
00:29:39.471 [2024-11-06 15:41:57.341792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.341822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.342202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.342230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.342572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.342602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.342973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.343005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.343246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.343274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.343637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.343665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.344017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.344050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.344442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.344471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.344834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.344864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.345314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.345344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 
00:29:39.471 [2024-11-06 15:41:57.345807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.345839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.346105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.346134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.346408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.346448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.346788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.346818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.471 qpair failed and we were unable to recover it. 00:29:39.471 [2024-11-06 15:41:57.347204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.471 [2024-11-06 15:41:57.347234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.347608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.347637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.347983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.348023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.348392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.348422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.348813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.348845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.349207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.349236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 
00:29:39.472 [2024-11-06 15:41:57.349485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.349514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.349874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.349906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.350296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.350326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.350688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.350726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.351070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.351104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.351454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.351493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.351777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.351808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.352188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.352218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.352566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.352596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.352941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.352972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 
00:29:39.472 [2024-11-06 15:41:57.353238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.353267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.353649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.353678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.354138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.354168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.354507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.354538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.354811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.354842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.355174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.355206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.355559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.355588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.355850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.355886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.356292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.356322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 00:29:39.472 [2024-11-06 15:41:57.356675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.472 [2024-11-06 15:41:57.356704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.472 qpair failed and we were unable to recover it. 
00:29:39.477 [2024-11-06 15:41:57.427536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.477 [2024-11-06 15:41:57.427565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.477 qpair failed and we were unable to recover it. 00:29:39.477 [2024-11-06 15:41:57.427831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.477 [2024-11-06 15:41:57.427860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.477 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.428280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.428312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.428564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.428593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.428844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.428876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.429153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.429182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.429585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.429613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.429979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.430010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.430400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.430429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.430780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.430811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 
00:29:39.752 [2024-11-06 15:41:57.431147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.431182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.431448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.431476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.431721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.431765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.432137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.432166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.432440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.432469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.432722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.432764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.432982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.433011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.433371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.433401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.433768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.433799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.752 qpair failed and we were unable to recover it. 00:29:39.752 [2024-11-06 15:41:57.434168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.752 [2024-11-06 15:41:57.434197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 
00:29:39.753 [2024-11-06 15:41:57.434614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.434643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.435059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.435090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.435380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.435409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.435656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.435685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.436025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.436055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.436299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.436331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.436716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.436757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.437183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.437213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.437561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.437590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.437929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.437960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 
00:29:39.753 [2024-11-06 15:41:57.438337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.438365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.438728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.438770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.439033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.439063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.439373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.439402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.439871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.439901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.440264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.440295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.440665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.440696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.440965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.441002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.441370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.441400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.441773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.441804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 
00:29:39.753 [2024-11-06 15:41:57.442194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.442223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.442582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.442611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.442874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.442904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.443143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.443173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.443536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.443566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.443917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.443948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.444339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.444367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.444742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.444783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.445134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.445164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.445527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.445556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 
00:29:39.753 [2024-11-06 15:41:57.445865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.445895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.446273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.446303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.446482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.446510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.446767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.446801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.447171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.447202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.753 [2024-11-06 15:41:57.447460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.753 [2024-11-06 15:41:57.447488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.753 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.447861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.447892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.448290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.448319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.448676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.448704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.449098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.449127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 
00:29:39.754 [2024-11-06 15:41:57.449485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.449515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.449914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.449945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.450326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.450356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.450612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.450644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.451036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.451067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.451479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.451508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.451877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.451909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.452257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.452287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.452671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.452700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.452968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.452997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 
00:29:39.754 [2024-11-06 15:41:57.453371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.453400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.453671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.453700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.453980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.454010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.454376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.454405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.454765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.454796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.455173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.455202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.455559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.455589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.455810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.455841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.456214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.456248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.456615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.456646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 
00:29:39.754 [2024-11-06 15:41:57.457007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.457038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.457393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.457423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.457848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.457878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.458110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.458142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.458397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.458427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.458790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.458823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.459075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.459105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.459452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.459483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.459861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.459893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.460214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.460244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 
00:29:39.754 [2024-11-06 15:41:57.460607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.460636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.461044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.754 [2024-11-06 15:41:57.461074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.754 qpair failed and we were unable to recover it. 00:29:39.754 [2024-11-06 15:41:57.461476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.461506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.461843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.461873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.462254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.462282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.462651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.462680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.463055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.463085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.463438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.463468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.463783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.463816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.464228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.464257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 
00:29:39.755 [2024-11-06 15:41:57.464534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.464562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.464814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.464845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.465245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.465273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.465669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.465697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.466059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.466090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.466469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.466505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.466793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.466824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.467218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.467246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.467595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.467624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.468070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.468101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 
00:29:39.755 [2024-11-06 15:41:57.468369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.468398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.468760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.468790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.469076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.469104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.469446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.469475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.469844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.469874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.470238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.470267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.470675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.470705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.471064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.471094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.471451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.471480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.471849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.471881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 
00:29:39.755 [2024-11-06 15:41:57.472238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.472266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.472651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.472680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.473073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.473104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.473438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.473467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.473756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.473787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.474232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.474260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.474623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.474652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.755 [2024-11-06 15:41:57.475015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.755 [2024-11-06 15:41:57.475046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.755 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.475298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.475327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.475679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.475708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 
00:29:39.756 [2024-11-06 15:41:57.476077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.476107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.476450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.476480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.476837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.476873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.477108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.477140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.477378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.477410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.477806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.477836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.478214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.478242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.478483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.478511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.478828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.478859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.479237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.479266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 
00:29:39.756 [2024-11-06 15:41:57.479627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.479657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.480035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.480066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.480438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.480465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.480784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.480813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.481204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.481232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.481485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.481514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.481858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.481887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.482253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.482281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.482637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.482664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.483046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.483074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 
00:29:39.756 [2024-11-06 15:41:57.483438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.483465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.483827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.483858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.484289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.484318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.484679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.484708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.484922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.484956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.485195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.485224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.485571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.485601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.485854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.485887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.486335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.486365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.486732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.486774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 
00:29:39.756 [2024-11-06 15:41:57.487070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.487100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.487489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.487519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.487864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.487896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.756 [2024-11-06 15:41:57.488273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.756 [2024-11-06 15:41:57.488303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.756 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.488657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.488689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.489058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.489091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.489454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.489485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.489875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.489908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.490271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.490301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.490461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.490491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 
00:29:39.757 [2024-11-06 15:41:57.490732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.490775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.491134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.491165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.491330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.491361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.491738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.491792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.492145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.492175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.492546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.492575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.492825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.492856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.493246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.493275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.493625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.493654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.493966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.493997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 
00:29:39.757 [2024-11-06 15:41:57.494355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.494384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.494832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.494863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.495209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.495240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.495620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.495649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.495906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.495936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.496321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.496350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.496716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.496758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.497181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.497211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.497580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.497609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.497863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.497895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 
00:29:39.757 [2024-11-06 15:41:57.498184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.498212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.498584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.498614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.498890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.498920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.499158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.499187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.757 qpair failed and we were unable to recover it. 00:29:39.757 [2024-11-06 15:41:57.499358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.757 [2024-11-06 15:41:57.499388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.499789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.499820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.500225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.500255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.500609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.500637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.501015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.501045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.501418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.501448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 
00:29:39.758 [2024-11-06 15:41:57.501815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.501853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.502222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.502251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.502620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.502650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.502918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.502949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.503218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.503248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.503650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.503680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.504096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.504126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.504482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.504512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.504874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.504904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.505239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.505269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 
00:29:39.758 [2024-11-06 15:41:57.505507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.505536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.505988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.506019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.506431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.506460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.506829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.506860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.507121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.507151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.507276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.507307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 
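
Every failure above follows the same two-step pattern: a POSIX connect() to the target at 10.0.0.2:4420 returns errno 111, which on Linux is ECONNREFUSED, and nvme_tcp_qpair_connect_sock then gives up on qpair 0x131f010. Connection-refused means the host was reachable but nothing was listening on the NVMe/TCP port at that moment, which is expected while the test tears down and restarts the target. Below is a minimal sketch of how that errno surfaces, assuming plain POSIX sockets; it illustrates the failure mode, not SPDK's actual posix.c code path.

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Target address as it appears in the log above. */
    struct sockaddr_in addr = { .sin_family = AF_INET };
    int fd, rc;

    addr.sin_port = htons(4420);              /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    if (rc != 0) {
        /* With the host reachable but nothing listening on the port,
         * this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return rc == 0 ? 0 : 1;
}

Run against a reachable host with no listener on port 4420, this prints "connect() failed, errno = 111 (Connection refused)", matching the log lines above; a down or unreachable host would instead time out or report EHOSTUNREACH.
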
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Write completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 Read completed with error (sct=0, sc=8)
00:29:39.758 starting I/O failed
00:29:39.758 [2024-11-06 15:41:57.508131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:39.758 [2024-11-06 15:41:57.508589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.758 [2024-11-06 15:41:57.508651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:39.758 qpair failed and we were unable to recover it.
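
The burst above is the driver draining a full queue: 32 outstanding reads and writes all complete with sct=0, sc=8, i.e. Status Code Type 0 (Generic Command Status) and status code 0x08, "Command Aborted due to SQ Deletion", which is how in-flight I/O is failed back when a queue pair is torn down. The accompanying "CQ transport error -6" is -ENXIO ("No such device or address", as the message itself spells out), after which the host retries on a freshly allocated qpair (0x7fea20000b90). A spec-level sketch of decoding that status word follows, assuming the 16-bit layout with the phase tag in bit 0 as SPDK packs it; this is an illustration of the NVMe status encoding, not SPDK code.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 16-bit completion status word: phase tag in bit 0, SC in bits 8:1,
     * SCT in bits 11:9 (NVMe base specification layout). */
    uint16_t status = (0x0u << 9) | (0x08u << 1);   /* sct=0, sc=8 */

    unsigned sc  = (status >> 1) & 0xffu;
    unsigned sct = (status >> 9) & 0x7u;

    /* sct=0 selects the Generic Command Status set; sc=0x08 there is
     * "Command Aborted due to SQ Deletion", i.e. the I/O was failed
     * because its queue pair was being destroyed. */
    printf("sct=%u, sc=%u\n", sct, sc);
    return 0;
}
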
00:29:39.758 [2024-11-06 15:41:57.509157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.758 [2024-11-06 15:41:57.509258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.758 qpair failed and we were unable to recover it. 00:29:39.758 [2024-11-06 15:41:57.509682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.509720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.510196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.510299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.510728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.510794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.511051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.511082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.511322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.511352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.511702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.511731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.512241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.512271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.512646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.512676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.513030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.513061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 
00:29:39.759 [2024-11-06 15:41:57.513316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.513345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.513563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.513593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.513853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.513884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.514256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.514285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.514662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.514692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.514967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.514998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.515247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.515280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.515626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.515656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.515894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.515924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.516291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.516321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 
00:29:39.759 [2024-11-06 15:41:57.516668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.516697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.516996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.517027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.517393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.517426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.517592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.517621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.517915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.517945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.518294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.518324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.518677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.518706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.519092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.519123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.519474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.519503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.519922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.519954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 
00:29:39.759 [2024-11-06 15:41:57.520334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.520365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.520547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.520575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.520892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.520922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.521197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.521228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.521602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.521633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.522014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.522045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.522277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.522308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.522529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.522560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.759 [2024-11-06 15:41:57.522952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.759 [2024-11-06 15:41:57.522982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.759 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.523381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.523412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 
00:29:39.760 [2024-11-06 15:41:57.523775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.523805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.524202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.524232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.524597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.524626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.524881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.524918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.525295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.525324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.525701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.525729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.526172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.526202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.526556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.526595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.526757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.526791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.527187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.527216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 
00:29:39.760 [2024-11-06 15:41:57.527574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.527603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.528083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.528114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.528463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.528493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.528832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.528862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.529242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.529271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.529615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.529643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.529962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.529992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.530334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.530364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.530614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.530643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.531016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.531045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 
00:29:39.760 [2024-11-06 15:41:57.531434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.531463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.531829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.531860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.532270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.532299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.532659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.532688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.533004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.533035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.533408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.533437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.533891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.533921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.534269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.534298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.534643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.534672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.534979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.535008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 
00:29:39.760 [2024-11-06 15:41:57.535392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.760 [2024-11-06 15:41:57.535422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.760 qpair failed and we were unable to recover it. 00:29:39.760 [2024-11-06 15:41:57.535673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.535703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.536079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.536110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.536477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.536506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.536772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.536802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.537032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.537064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.537342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.537371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.537728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.537765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.538045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.538074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.538448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.538476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 
00:29:39.761 [2024-11-06 15:41:57.538770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.538799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.539195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.539223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.539569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.539599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.539860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.539897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.540286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.540315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.540673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.540704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.541225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.541256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.541613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.541643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.542046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.542078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 00:29:39.761 [2024-11-06 15:41:57.542445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.761 [2024-11-06 15:41:57.542474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.761 qpair failed and we were unable to recover it. 
00:29:39.767 [2024-11-06 15:41:57.614687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.614721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.615095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.615126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.615359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.615389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.615744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.615792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.616047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.616079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.616465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.616496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.616852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.616883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.617218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.617248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.617510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.617540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.617934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.617967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 
00:29:39.767 [2024-11-06 15:41:57.618335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.618364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.618800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.618833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.619252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.619282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.619698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.619728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.620090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.620127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.620463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.620494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.620857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.620889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.621254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.621284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.621649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.621678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.622041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.622070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 
00:29:39.767 [2024-11-06 15:41:57.622438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.622467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.622834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.622866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.623219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.623248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.623610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.623643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.623980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.624010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.624441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.624471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.624853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.624884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.625095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.625124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.625370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.625398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 00:29:39.767 [2024-11-06 15:41:57.625625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.767 [2024-11-06 15:41:57.625656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.767 qpair failed and we were unable to recover it. 
00:29:39.767 [2024-11-06 15:41:57.625924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.625954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.626212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.626241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.626632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.626663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.627011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.627043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.627382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.627411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.627781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.627813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.628180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.628209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.628421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.628450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.628813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.628845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.629205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.629235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 
00:29:39.768 [2024-11-06 15:41:57.629597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.629626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.629917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.629946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.630205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.630234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.630570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.630605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.630840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.630870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.631224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.631254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.631620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.631650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.631994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.632024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.632395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.632425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.632678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.632711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 
00:29:39.768 [2024-11-06 15:41:57.632995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.633027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.633406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.633435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.633794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.633825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.634285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.634314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.634676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.634704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.635087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.635117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.635471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.635500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.635775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.635806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.636188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.636217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.636576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.636605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 
00:29:39.768 [2024-11-06 15:41:57.636984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.637014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.637391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.637420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.637784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.637815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.638200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.638229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.768 [2024-11-06 15:41:57.638585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.768 [2024-11-06 15:41:57.638614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.768 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.638975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.639006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.639372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.639401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.639767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.639797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.640045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.640074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.640320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.640351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 
00:29:39.769 [2024-11-06 15:41:57.640752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.640783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.641206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.641236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.641599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.641629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.641986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.642017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.642254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.642285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.642647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.642676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.643007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.643038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.643399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.643428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.643792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.643822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.644202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.644231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 
00:29:39.769 [2024-11-06 15:41:57.644636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.644664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.644981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.645010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.645386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.645415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.645784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.645813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.646211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.646240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.646675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.646705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.647121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.647152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.647513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.647543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.647907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.647936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.648197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.648226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 
00:29:39.769 [2024-11-06 15:41:57.648586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.648614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.648970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.649000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.649361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.649390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.649768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.649798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.650158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.650188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.650436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.650465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.650834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.650864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.651231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.651260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.651606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.651635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 00:29:39.769 [2024-11-06 15:41:57.651979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.769 [2024-11-06 15:41:57.652010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.769 qpair failed and we were unable to recover it. 
00:29:39.769 [2024-11-06 15:41:57.652257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.652290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.652659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.652688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.653057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.653087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.653458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.653487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.653855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.653886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.654322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.654351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.654629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.654662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.655023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.655054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.655414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.655445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.655703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.655732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 
00:29:39.770 [2024-11-06 15:41:57.656114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.656150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.656507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.656536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.656896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.656927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.657280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.657309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.657677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.657705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.658071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.658101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.658477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.658506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.658874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.658904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.659266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.659295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.659637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.659665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 
00:29:39.770 [2024-11-06 15:41:57.660029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.660059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.660307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.660337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.660694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.660723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.661082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.661112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.661471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.661499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.661866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.661896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.662230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.662260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.662619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.662648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.663065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.663096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.663432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.663461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 
00:29:39.770 [2024-11-06 15:41:57.663843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.663873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.664230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.664260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.664615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.664644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.665027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.665057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.665297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.665329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.665677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.665706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.770 [2024-11-06 15:41:57.666074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-11-06 15:41:57.666104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.770 qpair failed and we were unable to recover it. 00:29:39.771 [2024-11-06 15:41:57.666437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.771 [2024-11-06 15:41:57.666467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.771 qpair failed and we were unable to recover it. 00:29:39.771 [2024-11-06 15:41:57.666831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.771 [2024-11-06 15:41:57.666862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.771 qpair failed and we were unable to recover it. 00:29:39.771 [2024-11-06 15:41:57.667198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.771 [2024-11-06 15:41:57.667229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:39.771 qpair failed and we were unable to recover it. 
00:29:39.771 [2024-11-06 15:41:57.667595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.771 [2024-11-06 15:41:57.667623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:39.771 qpair failed and we were unable to recover it.
00:29:40.049 [... last three-line message repeated ~210 times between 15:41:57.667 and 15:41:57.746 (console timestamps 00:29:39.771-00:29:40.049); every repetition reports the same tqpair (0x7fea20000b90), target (10.0.0.2, port 4420), and errno (111) - only the microsecond timestamps vary ...]
00:29:40.050 [2024-11-06 15:41:57.746972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.747016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.747367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.747397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.747757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.747786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.748138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.748168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.748506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.748535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.748893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.748922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.749188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.749217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.749454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.749486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.749818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.749847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.750025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.750055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-11-06 15:41:57.750300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.750329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.750697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.750728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.751064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.751094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.751400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.751429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.751789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.751820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.752205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.752234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.752598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.752626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.752975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.753005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.753362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.753391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.753764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.753795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-11-06 15:41:57.754153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.754181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.754417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.754448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.754817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.754847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.755218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.755249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.755606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.755635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.756012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.756043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.756412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.756442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.756778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.756808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.757154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.757183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 00:29:40.050 [2024-11-06 15:41:57.757520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.050 [2024-11-06 15:41:57.757548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-11-06 15:41:57.757909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.757940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.758172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.758203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.758561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.758590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.758962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.758992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.759359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.759388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.759766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.759796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.760146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.760175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.760543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.760572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.760944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.760974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.761339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.761368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 
00:29:40.051 [2024-11-06 15:41:57.761722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.761765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.762116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.762145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.762511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.762540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.762907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.762938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.763302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.763332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.763702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.763730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.764092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.764121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.764487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.764515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.764846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.764876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.765240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.765269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 
00:29:40.051 [2024-11-06 15:41:57.765628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.765657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.766011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.766041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.766395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.766424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.766803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.766832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.767223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.767253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.767618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.767647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.768022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.768052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.768436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.768465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.768824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.768854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.769212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.769240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 
00:29:40.051 [2024-11-06 15:41:57.769608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.769637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.769869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.769901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.770279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.770309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.770558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.770587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.770949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.770979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.771339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.771370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.771621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.051 [2024-11-06 15:41:57.771651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.051 qpair failed and we were unable to recover it. 00:29:40.051 [2024-11-06 15:41:57.772029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.772060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.772425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.772455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.772813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.772843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 
00:29:40.052 [2024-11-06 15:41:57.773213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.773243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.773599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.773627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.773849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.773878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.774219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.774247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.774610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.774638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.774981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.775019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.775353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.775382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.775634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.775661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.775897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.775929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.776289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.776318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 
00:29:40.052 [2024-11-06 15:41:57.776702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.776736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.777092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.777121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.777495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.777524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.777862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.777893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.778232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.778262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.778619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.778648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.779013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.779043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.779417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.779446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.779807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.779837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.780213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.780242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 
00:29:40.052 [2024-11-06 15:41:57.780599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.780629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.780983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.781012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.781377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.781406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.781758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.781790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.782179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.782209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.782458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.782487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.782838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.782870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.783240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.783269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.052 [2024-11-06 15:41:57.783629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.052 [2024-11-06 15:41:57.783657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.052 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.783898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.783932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 
00:29:40.053 [2024-11-06 15:41:57.784300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.784328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.784683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.784711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.785094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.785125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.785487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.785516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.785872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.785901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.786275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.786303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.786767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.786798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.787121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.787151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.787511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.787540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.787795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.787825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 
00:29:40.053 [2024-11-06 15:41:57.788172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.788202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.788564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.788593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.788968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.788998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.789360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.789389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.789616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.789645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.789960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.789989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.790271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.790299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.790648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.790678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.791045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.791076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.791428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.791457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 
00:29:40.053 [2024-11-06 15:41:57.791822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.791859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.792211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.792241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.792599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.792628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.792978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.793009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.793380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.793410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.793842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.793872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.794238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.794267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.794630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.794659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.795100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.795131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.795481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.795510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 
00:29:40.053 [2024-11-06 15:41:57.795904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.795935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.796293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.796322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.796572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.796604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.796872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.796901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.797270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.797299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.797665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.797694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.798051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.053 [2024-11-06 15:41:57.798081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.053 qpair failed and we were unable to recover it. 00:29:40.053 [2024-11-06 15:41:57.798433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.798462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.798702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.798735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.799103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.799134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 
00:29:40.054 [2024-11-06 15:41:57.799512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.799541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.799903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.799934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.800309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.800339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.800707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.800735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.801098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.801128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.801476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.801505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.801876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.801905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.802138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.802171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.802412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.802441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.802877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.802908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 
00:29:40.054 [2024-11-06 15:41:57.803276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.803305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.803601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.803630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.803991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.804022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.804406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.804435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.804795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.804825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.805206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.805235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.805597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.805624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.805995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.806025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.806393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.806421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.806782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.806818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 
00:29:40.054 [2024-11-06 15:41:57.807224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.807259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.807685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.807714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.808089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.808120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.808525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.808554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.808909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.808940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.809309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.809338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.809716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.809752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.809980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.810012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.810284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.810314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.810661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.810690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 
00:29:40.054 [2024-11-06 15:41:57.811062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.811093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.811457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.811486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.811870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.811900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.812326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.812355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.812685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.054 [2024-11-06 15:41:57.812714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.054 qpair failed and we were unable to recover it. 00:29:40.054 [2024-11-06 15:41:57.813107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.813137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.813520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.813549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.813885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.813916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.814327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.814356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.814713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.814741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 
00:29:40.055 [2024-11-06 15:41:57.815097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.815126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.815486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.815515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.815934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.815964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.816324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.816354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.816732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.816777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.817147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.817176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.817428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.817456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.817814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.817845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.818251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.818281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.818722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.818758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 
00:29:40.055 [2024-11-06 15:41:57.819121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.819151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.819506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.819536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.819795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.819825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.820205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.820235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.820599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.820627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.820871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.820901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.821245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.821275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.821690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.821719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.822147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.822177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.822434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.822463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 
00:29:40.055 [2024-11-06 15:41:57.822820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.822863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.823205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.823236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.823602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.823631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.823984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.824015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.824417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.824447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.824829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.824859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.825236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.825266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.825644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.825672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.826019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.826049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.826414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.826444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 
00:29:40.055 [2024-11-06 15:41:57.826809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.826839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.827205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.827234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.827595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.827624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.055 [2024-11-06 15:41:57.828006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.055 [2024-11-06 15:41:57.828036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.055 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.828399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.828428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.828735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.828781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.829079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.829108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.829458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.829488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.829849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.829879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.830246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.830275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 
00:29:40.056 [2024-11-06 15:41:57.830635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.830663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.831025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.831055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.831414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.831444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.831808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.831839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.832127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.832156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.832432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.832461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.832850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.832881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.833233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.833262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.833625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.833653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.833914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.833945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 
00:29:40.056 [2024-11-06 15:41:57.834197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.834225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.834649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.834678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.835009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.835040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.835395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.835424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.835785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.835814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.836175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.836203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.836561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.836590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.836946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.836977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.837227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.837260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.837613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.837643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 
00:29:40.056 [2024-11-06 15:41:57.838020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.838050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.838411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.838441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.838804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.838835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.839197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.839226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.839602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.839631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.840071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.840102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.840468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.840498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.840902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.840932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.841312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.841341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.841702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.841731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 
00:29:40.056 [2024-11-06 15:41:57.842109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.842138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.842412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.056 [2024-11-06 15:41:57.842441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.056 qpair failed and we were unable to recover it. 00:29:40.056 [2024-11-06 15:41:57.842798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.842828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.843213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.843242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.843493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.843523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.843893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.843924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.844351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.844380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.844714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.844758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.845183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.845213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.845575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.845604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 
00:29:40.057 [2024-11-06 15:41:57.845987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.846017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.846358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.846388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.846763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.846795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.847027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.847058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.847433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.847463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.847823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.847854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.848257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.848288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.848516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.848555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.848904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.848935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.849287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.849315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 
00:29:40.057 [2024-11-06 15:41:57.849691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.849720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.850089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.850119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.850492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.850521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.850912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.850943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.851294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.851326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.851690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.851721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.852072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.852102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.852451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.852481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.852843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.852876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.853229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.853259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 
00:29:40.057 [2024-11-06 15:41:57.853503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.853534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.853950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.853981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.854326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.854356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.854718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.854757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.855125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.855155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.855508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.855537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.057 qpair failed and we were unable to recover it. 00:29:40.057 [2024-11-06 15:41:57.855914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-11-06 15:41:57.855944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.856308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.856339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.856699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.856729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.857106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.857138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 
00:29:40.058 [2024-11-06 15:41:57.857496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.857525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.857888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.857920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.858283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.858312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.858689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.858720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.859094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.859124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.859494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.859525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.859945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.859975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.860321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.860350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.860641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.860671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.861004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.861034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 
00:29:40.058 [2024-11-06 15:41:57.861297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.861326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.861675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.861705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.862126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.862159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.862514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.862546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.862918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.862949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.863301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.863332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.863716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.863753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.864140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.864175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.864539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.864568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 00:29:40.058 [2024-11-06 15:41:57.864921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-11-06 15:41:57.864952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.058 qpair failed and we were unable to recover it. 
00:29:40.064 [2024-11-06 15:41:57.940617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.940646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.941011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.941041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.941370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.941399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.941633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.941665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.942027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.942058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.942423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.942453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.942887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.942917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.943223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.943253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.943618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.943647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.943998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.944030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 
00:29:40.064 [2024-11-06 15:41:57.944397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.944426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.944779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.944809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.945250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.945280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.945641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.945670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.946059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.946090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.946422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.946452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.946813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.946843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.947193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.947223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.947591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.947620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.947889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.947919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 
00:29:40.064 [2024-11-06 15:41:57.948282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.948312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.948669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.948699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.949185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.949216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.949585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.949615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.949984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.950014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.950239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.950270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.950505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.950533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.950893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.950923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.951286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.951316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.951678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.951706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 
00:29:40.064 [2024-11-06 15:41:57.952119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.952149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.952503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.952531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.952905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.952936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.953304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.953334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.953704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.953732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.954102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.954132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.954496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.064 [2024-11-06 15:41:57.954525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.064 qpair failed and we were unable to recover it. 00:29:40.064 [2024-11-06 15:41:57.954974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.955010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.955409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.955438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.955802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.955831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 
00:29:40.065 [2024-11-06 15:41:57.956190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.956220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.956452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.956483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.956855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.956887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.957140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.957170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.957401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.957433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.957689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.957718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.958154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.958184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.958546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.958575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.958954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.958984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.959210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.959240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 
00:29:40.065 [2024-11-06 15:41:57.959551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.959581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.959928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.959960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.960213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.960243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.960599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.960628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.960970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.961001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.961357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.961386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.961757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.961787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.962146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.962175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.962539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.962568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.962821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.962851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 
00:29:40.065 [2024-11-06 15:41:57.963103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.963132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.963361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.963392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.963725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.963762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.964124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.964153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.964532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.964562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.964915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.964955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.965364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.965393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.965724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.965762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.966105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.966134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.966514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.966542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 
00:29:40.065 [2024-11-06 15:41:57.966887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.966916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.967258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.967288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.967654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.967683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.968078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.968110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.065 qpair failed and we were unable to recover it. 00:29:40.065 [2024-11-06 15:41:57.968479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.065 [2024-11-06 15:41:57.968509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.968763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.968795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.969161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.969190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.969551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.969588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.969952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.969982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.970233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.970263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 
00:29:40.066 [2024-11-06 15:41:57.970609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.970639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.971042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.971072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.971440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.971469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.971716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.971755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.972176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.972207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.972547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.972575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.972845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.972874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.973328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.973357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.973717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.973756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.973885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.973915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 
00:29:40.066 [2024-11-06 15:41:57.974256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.974286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.974625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.974655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.974987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.975019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.975380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.975410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.975765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.975794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.976168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.976197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.976568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.976597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.976976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.977006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.977386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.977415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.977807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.977837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 
00:29:40.066 [2024-11-06 15:41:57.978198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.978226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.978602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.978632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.979000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.979030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.979385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.979414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.979671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.979702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.980046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.066 [2024-11-06 15:41:57.980077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.066 qpair failed and we were unable to recover it. 00:29:40.066 [2024-11-06 15:41:57.980446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.980476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.980732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.980770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.981130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.981159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.981550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.981579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 
00:29:40.067 [2024-11-06 15:41:57.981949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.981979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.982339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.982368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.982598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.982630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.983004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.983035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.983402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.983431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.983835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.983866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.984228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.984257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.984625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.984660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.985025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.985057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.985414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.985443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 
00:29:40.067 [2024-11-06 15:41:57.985813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.985843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.986190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.986219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.986578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.986607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.986876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.986905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.987307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.987336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.987692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.987721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.988091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.988120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.988489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.988519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.988698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.988730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 00:29:40.067 [2024-11-06 15:41:57.989132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.067 [2024-11-06 15:41:57.989162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.067 qpair failed and we were unable to recover it. 
00:29:40.067 [2024-11-06 15:41:57.989607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.067 [2024-11-06 15:41:57.989637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:40.067 qpair failed and we were unable to recover it.
[... the same three-line error repeats without interruption from 15:41:57.989 to 15:41:58.068: every connect() attempt by posix_sock_create to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for the same tqpair=0x7fea20000b90, and each time the qpair cannot be recovered ...]
00:29:40.345 [2024-11-06 15:41:58.068570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.345 [2024-11-06 15:41:58.068599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:40.345 qpair failed and we were unable to recover it.
00:29:40.345 [2024-11-06 15:41:58.068973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.345 [2024-11-06 15:41:58.069004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.345 qpair failed and we were unable to recover it. 00:29:40.345 [2024-11-06 15:41:58.069365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.345 [2024-11-06 15:41:58.069394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.345 qpair failed and we were unable to recover it. 00:29:40.345 [2024-11-06 15:41:58.069764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.345 [2024-11-06 15:41:58.069793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.070028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.070061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.070450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.070485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.070824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.070856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.071209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.071239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.071596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.071625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.071999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.072028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.072400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.072430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 
00:29:40.346 [2024-11-06 15:41:58.072680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.072710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.073085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.073115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.073348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.073380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.073724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.073761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.074113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.074143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.074496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.074524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.074892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.074923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.075275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.075305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.075711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.075741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.076187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.076217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 
00:29:40.346 [2024-11-06 15:41:58.076569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.076599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.076842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.076875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.077224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.077255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.077612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.077641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.077983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.078014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.078256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.078285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.078644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.078674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.079038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.079068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.079432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.079460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.079718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.079755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 
00:29:40.346 [2024-11-06 15:41:58.080160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.080189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.080548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.080577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.080928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.080957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.081334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.081363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.081723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.346 [2024-11-06 15:41:58.081759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.346 qpair failed and we were unable to recover it. 00:29:40.346 [2024-11-06 15:41:58.082115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.082144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.082511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.082541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.082907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.082938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.083296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.083324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.083672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.083700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 
00:29:40.347 [2024-11-06 15:41:58.083966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.083998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.084270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.084298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.084694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.084722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.085122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.085151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.085513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.085548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.085810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.085839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.086231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.086260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.086616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.086646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.087014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.087045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.087416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.087445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 
00:29:40.347 [2024-11-06 15:41:58.087808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.087837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.088178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.088207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.088462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.088492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.088852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.088882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.089244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.089273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.089514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.089545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.089896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.089926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.090286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.090316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.090680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.090710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.091067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.091097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 
00:29:40.347 [2024-11-06 15:41:58.091452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.091482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.091822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.091853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.092203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.092231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.092601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.092629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.092973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.093004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.093356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.093387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.093740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.093779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.094058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.094089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.094451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.094480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.094726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.094765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 
00:29:40.347 [2024-11-06 15:41:58.095166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.095196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.095548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.095579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.095889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.347 [2024-11-06 15:41:58.095921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.347 qpair failed and we were unable to recover it. 00:29:40.347 [2024-11-06 15:41:58.096280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.096312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.096664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.096694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.097118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.097152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.097508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.097538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.097817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.097849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.098259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.098290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.098649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.098679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 
00:29:40.348 [2024-11-06 15:41:58.099020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.099052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.099411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.099442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.099688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.099719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.100136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.100167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.100536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.100572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.100811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.100843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.101096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.101126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.101497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.101526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.101802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.101833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.102061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.102095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 
00:29:40.348 [2024-11-06 15:41:58.102480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.102510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.102876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.102907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.103282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.103314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.103636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.103666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.103935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.103966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.104312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.104343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.104711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.104740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.105203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.105234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.105493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.105525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.105995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.106027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 
00:29:40.348 [2024-11-06 15:41:58.106381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.106413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.106770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.106802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.107077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.107109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.107470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.107500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.107830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.107861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.108219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.108249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.108621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.108652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.108992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.109024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.109270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.109304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.109595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.109625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 
00:29:40.348 [2024-11-06 15:41:58.109990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.110022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.348 [2024-11-06 15:41:58.110255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.348 [2024-11-06 15:41:58.110287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.348 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.110650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.110680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.111068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.111099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.111493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.111524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.111873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.111905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.112262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.112294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.112417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.112449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.112685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.112717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.113086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.113118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 
00:29:40.349 [2024-11-06 15:41:58.113478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.113510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.113887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.113919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.114254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.114285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.114617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.114647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.115060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.115098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.115432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.115461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.115853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.115884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.116112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.116145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.116337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.116368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 00:29:40.349 [2024-11-06 15:41:58.116772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.349 [2024-11-06 15:41:58.116803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.349 qpair failed and we were unable to recover it. 
00:29:40.349 [2024-11-06 15:41:58.117203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.349 [2024-11-06 15:41:58.117233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:40.349 qpair failed and we were unable to recover it.
00:29:40.349 [... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats approximately 210 times in total, from [2024-11-06 15:41:58.117203] through [2024-11-06 15:41:58.197281]; elapsed-time prefixes run 00:29:40.349 through 00:29:40.353 ...]
00:29:40.355 [2024-11-06 15:41:58.197644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.197674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.197971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.198001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.198366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.198396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.198766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.198795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.199166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.199196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.199546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.199575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.199803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.199835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.200228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.200257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.200625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.200654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.201007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.201040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 
00:29:40.355 [2024-11-06 15:41:58.201447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.201477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.201850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.201881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.202278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.202307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.202672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.202701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.202994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.203023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.203372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.203401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.203767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.203797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.204161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.204188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.204432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.204464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.204648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.204679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 
00:29:40.355 [2024-11-06 15:41:58.205059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.205090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.205438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.205467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.205778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.205808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.206179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.206208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.206571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.206606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.207014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.207044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.207406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.207434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.207797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.207827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.208188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.208216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.208585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.208614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 
00:29:40.355 [2024-11-06 15:41:58.208982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.209013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.209379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.355 [2024-11-06 15:41:58.209407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.355 qpair failed and we were unable to recover it. 00:29:40.355 [2024-11-06 15:41:58.209754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.209785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.210138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.210167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.210526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.210554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.210918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.210948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.211296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.211325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.211697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.211725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.212092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.212123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.212480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.212509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 
00:29:40.356 [2024-11-06 15:41:58.212875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.212906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.213266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.213296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.213543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.213574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.213935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.213965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.214202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.214231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.214637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.214665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.215009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.215039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.215410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.215438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.215801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.215831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.216098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.216126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 
00:29:40.356 [2024-11-06 15:41:58.216361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.216393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.216768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.216800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.217140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.217170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.217507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.217536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.217911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.217941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.218297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.218326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.218669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.218698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.219069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.219098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.219459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.219489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.219831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.219861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 
00:29:40.356 [2024-11-06 15:41:58.220238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.220267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.220714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.220742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.221126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.221156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.221594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.221622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.221997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.222027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.222394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.222423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.222804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.222835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.223197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.223225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.223583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.356 [2024-11-06 15:41:58.223612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.356 qpair failed and we were unable to recover it. 00:29:40.356 [2024-11-06 15:41:58.223920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.223951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 
00:29:40.357 [2024-11-06 15:41:58.224317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.224346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.224707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.224736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.225111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.225141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.225501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.225530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.225895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.225925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.226278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.226307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.226664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.226692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.227048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.227078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.227454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.227483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.227920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.227950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 
00:29:40.357 [2024-11-06 15:41:58.228317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.228345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.228699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.228727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.229143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.229173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.229526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.229556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.229903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.229933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.230300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.230329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.230692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.230721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.231087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.231116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.231489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.231518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.231954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.231983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 
00:29:40.357 [2024-11-06 15:41:58.232355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.232383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.232765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.232801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.233266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.233296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.233660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.233688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.234058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.234087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.234442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.234471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.234831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.234861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.235243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.235272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.235638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.235666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.236036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.236066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 
00:29:40.357 [2024-11-06 15:41:58.236447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.236476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.236743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.236779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.237153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.237182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.237419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.237450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.237809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.237839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.238219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.238249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.238606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.238635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.357 [2024-11-06 15:41:58.238997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.357 [2024-11-06 15:41:58.239027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.357 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.239389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.239418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.239786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.239814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 
00:29:40.358 [2024-11-06 15:41:58.240183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.240213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.240596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.240626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.240971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.241002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.241364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.241393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.241760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.241790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.242142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.242170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.242426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.242454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.242805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.242835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.243236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.243265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.243619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.243649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 
00:29:40.358 [2024-11-06 15:41:58.244017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.244047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.244412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.244442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.244881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.244911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.245129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.245160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.245539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.245568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.245818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.245850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.246244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.246272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.246566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.246594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.246857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.246888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.247246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.247274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 
00:29:40.358 [2024-11-06 15:41:58.247625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.247654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.248050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.248086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.248470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.248498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.248858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.248888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.249298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.249328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.249568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.249597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.249960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.249990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.250358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.250387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.250647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.250679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 00:29:40.358 [2024-11-06 15:41:58.250921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.358 [2024-11-06 15:41:58.250955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.358 qpair failed and we were unable to recover it. 
00:29:40.637 [2024-11-06 15:41:58.324192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.324222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.637 qpair failed and we were unable to recover it. 00:29:40.637 [2024-11-06 15:41:58.324468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.324505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.637 qpair failed and we were unable to recover it. 00:29:40.637 [2024-11-06 15:41:58.324873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.324904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.637 qpair failed and we were unable to recover it. 00:29:40.637 [2024-11-06 15:41:58.325272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.325301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.637 qpair failed and we were unable to recover it. 00:29:40.637 [2024-11-06 15:41:58.325567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.325597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.637 qpair failed and we were unable to recover it. 00:29:40.637 [2024-11-06 15:41:58.325949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.325979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.637 qpair failed and we were unable to recover it. 00:29:40.637 [2024-11-06 15:41:58.326342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.326371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.637 qpair failed and we were unable to recover it. 00:29:40.637 [2024-11-06 15:41:58.326723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.326760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.637 qpair failed and we were unable to recover it. 00:29:40.637 [2024-11-06 15:41:58.327122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.327152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.637 qpair failed and we were unable to recover it. 00:29:40.637 [2024-11-06 15:41:58.327511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.327540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.637 qpair failed and we were unable to recover it. 
00:29:40.637 [2024-11-06 15:41:58.327906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.327937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.637 qpair failed and we were unable to recover it. 00:29:40.637 [2024-11-06 15:41:58.328296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.328325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.637 qpair failed and we were unable to recover it. 00:29:40.637 [2024-11-06 15:41:58.328692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.637 [2024-11-06 15:41:58.328721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.329135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.329165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.329523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.329551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.329929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.329959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.330330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.330359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.330721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.330758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.331108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.331138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.331505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.331534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 
00:29:40.638 [2024-11-06 15:41:58.332005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.332036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.332271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.332301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.332667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.332696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.333058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.333090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.333454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.333485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.333846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.333876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.334112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.334142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.334514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.334544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.334909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.334940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.335318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.335346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 
00:29:40.638 [2024-11-06 15:41:58.335605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.335635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.335998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.336029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.336395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.336426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.336784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.336814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.337175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.337204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.337570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.337600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.337849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.337882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.338264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.338294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.338656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.338687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.338944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.338975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 
00:29:40.638 [2024-11-06 15:41:58.339350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.339379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.339737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.339793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.340169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.340198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.340560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.340590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.340843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.340876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.341225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.341256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.341624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.341654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.342028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.342058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.342418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.342447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.638 [2024-11-06 15:41:58.342825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.342856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 
00:29:40.638 [2024-11-06 15:41:58.343221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.638 [2024-11-06 15:41:58.343251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.638 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.343594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.343625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.343973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.344004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.344356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.344386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.344757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.344788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.345169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.345205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.345529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.345559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.345920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.345954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.346318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.346347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.346711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.346743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 
00:29:40.639 [2024-11-06 15:41:58.347148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.347180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.347423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.347457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.347804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.347835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.348090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.348120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.348478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.348508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.348843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.348873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.349243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.349273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.349652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.349683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.350053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.350085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.350467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.350497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 
00:29:40.639 [2024-11-06 15:41:58.350867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.350898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.351257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.351287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.351667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.351696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.351979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.352013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.352372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.352402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.352768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.352800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.353158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.353189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.353554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.353583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.353928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.353960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.354314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.354344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 
00:29:40.639 [2024-11-06 15:41:58.354586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.354619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.354861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.354910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.355266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.355298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.355537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.355571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.355839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.355870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.356236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.356269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.356638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.356667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.357025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.357055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.639 [2024-11-06 15:41:58.357419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.639 [2024-11-06 15:41:58.357449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.639 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.357801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.357831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 
00:29:40.640 [2024-11-06 15:41:58.358255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.358285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.358624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.358655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.359041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.359072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.359337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.359369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.361370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.361431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.361834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.361867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.362219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.362250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.362619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.362648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.363061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.363092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.363427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.363457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 
00:29:40.640 [2024-11-06 15:41:58.363601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.363631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.363904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.363935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.364354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.364384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.364633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.364662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.364862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.364892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.365277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.365308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.365673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.365704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.366075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.366106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.366466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.366496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.366871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.366903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 
00:29:40.640 [2024-11-06 15:41:58.367157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.367186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.367441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.367470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.367834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.367865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.368170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.368199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.368557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.368585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.368934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.368965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.369337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.369367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.369731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.369771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.369887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.369916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.370284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.370315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 
00:29:40.640 [2024-11-06 15:41:58.370680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.370711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.371148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.371185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.371531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.371562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.371814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.371845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.372207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.372237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.372603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.372632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.372980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.640 [2024-11-06 15:41:58.373011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.640 qpair failed and we were unable to recover it. 00:29:40.640 [2024-11-06 15:41:58.373352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.373382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 00:29:40.641 [2024-11-06 15:41:58.373660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.373689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 00:29:40.641 [2024-11-06 15:41:58.373967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.374001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 
00:29:40.641 [2024-11-06 15:41:58.374355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.374383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 00:29:40.641 [2024-11-06 15:41:58.374756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.374788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 00:29:40.641 [2024-11-06 15:41:58.375070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.375099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 00:29:40.641 [2024-11-06 15:41:58.375431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.375461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 00:29:40.641 [2024-11-06 15:41:58.375703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.375733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 00:29:40.641 [2024-11-06 15:41:58.377619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.377690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 00:29:40.641 [2024-11-06 15:41:58.378098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.378133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 00:29:40.641 [2024-11-06 15:41:58.378473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.378503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 00:29:40.641 [2024-11-06 15:41:58.378863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.378897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 00:29:40.641 [2024-11-06 15:41:58.379265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.641 [2024-11-06 15:41:58.379297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:40.641 qpair failed and we were unable to recover it. 
00:29:40.641 [2024-11-06 15:41:58.382803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.382868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.383132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.383166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.383514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.383551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.383933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.383970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.384349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.384384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.384771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.384804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.385045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.385078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.385467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.385498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.386033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.386144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.386588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.386625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.387119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.387225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.387571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.387612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.387976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.388020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.388399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.388431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.388910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.388945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.389308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.389342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.389582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.389617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.389916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.389950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.390153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.390184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.641 [2024-11-06 15:41:58.390573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.641 [2024-11-06 15:41:58.390606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.641 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.390988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.391022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.391394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.391423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.391806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.391838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.392189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.392227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.392571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.392602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.392949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.392983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.393341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.393371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.393731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.393776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.394034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.394064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.394321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.394350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.394783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.394814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.395209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.395239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.395623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.395652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.395903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.395934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.396304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.396334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.396696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.396735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.397150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.397179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.397523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.397555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.397965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.397996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.398242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.398274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.398626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.398656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.398907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.398945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.399397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.399426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.399792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.399823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.400221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.400251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.400614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.400642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.400989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.401019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.401374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.401403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.401767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.401798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.402170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.402203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.402533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.402562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.402856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.402887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.403305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.403334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.403574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.403603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.403971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.404002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.404272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.404301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.404646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.404675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.405116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.405147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.642 [2024-11-06 15:41:58.405528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.642 [2024-11-06 15:41:58.405558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.642 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.405924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.405954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.406320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.406349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.406700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.406728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.406988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.407024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.407382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.407412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.407774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.407805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.408163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.408192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.408560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.408589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.408832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.408863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.409211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.409240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.409592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.409620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.409977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.410008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.410374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.410402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.410770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.410800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.411158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.411188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.411547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.411575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.411930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.411961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.412327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.412357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.412768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.412798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.413177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.413207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.413558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.413587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.413955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.413986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.414345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.414375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.414818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.414848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.415230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.415259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.415610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.415640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.415891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.415922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.416306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.416335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.416696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.416724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.417092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.417123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.417493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.417527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.417895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.417925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.418277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.418306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.418667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.418697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.419057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.419088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.419430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.419461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.419619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.419648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.419872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.419907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.643 [2024-11-06 15:41:58.420159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.643 [2024-11-06 15:41:58.420190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.643 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.420546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.420576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.420944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.420974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.421331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.421361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.421726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.421763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.422123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.422152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.422524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.422554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.422917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.422947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.423294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.423322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.423684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.423713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.424093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.424124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.424490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.424519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.424881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.424912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.425278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.425308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.425670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.425698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.426061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.426091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.426345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.426379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.426759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.426790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.427144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.427173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.427507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.427537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.427862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.427895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.428272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.428300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.428652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.428681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.429026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.429058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.429365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.429394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.429723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.429761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.429928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.429961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.430324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.430352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.430707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.430735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.431105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.431134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.431499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.431528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.431905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.431935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.432287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.432315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.432686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.432716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.433086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.433117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.433478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.433506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.433867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.433898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.434248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.434277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.434645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.434673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.644 [2024-11-06 15:41:58.435030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.644 [2024-11-06 15:41:58.435062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.644 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.435402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.435432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.435697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.435726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.436087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.436117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.436474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.436504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.436862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.436892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.437205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.437234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.437448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.437477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.437763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.437793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.438159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.438189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.438623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.438653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.438909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.438938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.439292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.439321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.439679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.439708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.440079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.440110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.440402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.440431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.440790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.440821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.441163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.441192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.441556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.441585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.441937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.441968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.442319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.442348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.442707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.442742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.443138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.443168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.443529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.443558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.443908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.443939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.444298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.444326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.444690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.444719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.445137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.445167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.445517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.445547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.445906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.445937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.446296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.446325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.446580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.446612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.446974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.447004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.447363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.447392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.447659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.645 [2024-11-06 15:41:58.447688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.645 qpair failed and we were unable to recover it.
00:29:40.645 [2024-11-06 15:41:58.448076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.448107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.448519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.448550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.448906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.448936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.449302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.449333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.449682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.449712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.450086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.450116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.450472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.450501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.450870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.450899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.451321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.451349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.451709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.451738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.452120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.452149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.452516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.452545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.452921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.452951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.453187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.453222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.453568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.453599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.453979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.454008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.454351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.454380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.454688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.454717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.454978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.455007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.455355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.455385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.455683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.455712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.456089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.456118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.456488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.456518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.456976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.457007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.457418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.457447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.457689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.457718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.458107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.458137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.458512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.458541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.458889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.458920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.459275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.459303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.459651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.459680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.460050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.460079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.460455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.460485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.460738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.460779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.461042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.461071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.461443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.461471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.461832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.461862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.462266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.462297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.646 [2024-11-06 15:41:58.462658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.646 [2024-11-06 15:41:58.462687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.646 qpair failed and we were unable to recover it.
00:29:40.647 [2024-11-06 15:41:58.463066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.463096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.463458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.463487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.463824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.463855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.464144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.464172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.464535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.464565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.464911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.464941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.465200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.465228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.465578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.465607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.465970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.466000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.466364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.466392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 
00:29:40.647 [2024-11-06 15:41:58.466740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.466778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.467129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.467159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.467535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.467563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.467905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.467937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.468374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.468403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.468783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.468815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.469223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.469252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.469603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.469633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.469986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.470017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.470372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.470401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 
00:29:40.647 [2024-11-06 15:41:58.470766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.470795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.471165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.471194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.471531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.471561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.471919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.471948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.472318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.472348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.472713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.472742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.473002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.473031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.473403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.473432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.473705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.473734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.474103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.474133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 
00:29:40.647 [2024-11-06 15:41:58.474533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.474563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.474909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.474939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.475289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.475318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.475680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.475709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.476068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.476098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.476465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.476493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.476856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.476885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.477255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.477283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.647 [2024-11-06 15:41:58.477647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.647 [2024-11-06 15:41:58.477676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.647 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.478055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.478086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 
00:29:40.648 [2024-11-06 15:41:58.478319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.478352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.478726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.478767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.479109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.479146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.479504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.479533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.479879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.479910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.480282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.480310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.480676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.480704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.480929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.480959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.481340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.481368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.481729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.481767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 
00:29:40.648 [2024-11-06 15:41:58.482121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.482150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.482508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.482537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.482892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.482923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.483290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.483318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.483682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.483710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.484167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.484197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.484563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.484591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.484966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.484996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.485348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.485377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.485740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.485778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 
00:29:40.648 [2024-11-06 15:41:58.486144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.486172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.486538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.486567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.486925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.486956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.487316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.487345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.487763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.487794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.488158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.488186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.488548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.488576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.488837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.488868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.489252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.489281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.489589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.489624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 
00:29:40.648 [2024-11-06 15:41:58.489989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.490019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.490259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.490288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.490664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.490692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.491099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.491128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.491485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.491515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.491896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.491926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.492300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.492328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.648 qpair failed and we were unable to recover it. 00:29:40.648 [2024-11-06 15:41:58.492778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.648 [2024-11-06 15:41:58.492810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.493162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.493191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.493629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.493658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 
00:29:40.649 [2024-11-06 15:41:58.494002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.494033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.494190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.494221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.494578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.494607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.494977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.495008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.495377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.495406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.495772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.495802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.496065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.496094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.496444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.496474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.496813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.496844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.497217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.497246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 
00:29:40.649 [2024-11-06 15:41:58.497593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.497630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.497924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.497953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.498178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.498207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.498449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.498478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.498836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.498866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.499236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.499265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.499641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.499675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.500042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.500073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.500438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.500466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.500889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.500919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 
00:29:40.649 [2024-11-06 15:41:58.501272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.501301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.501662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.501690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.502043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.502073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.502433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.502462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.502823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.502853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.503222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.503251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.503609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.503639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.503997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.504027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.504366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.504396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.504654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.504684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 
00:29:40.649 [2024-11-06 15:41:58.505079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.505111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.505473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.505503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.505866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.505897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.506255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.506284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.506641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.506669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.507011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.507042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.649 qpair failed and we were unable to recover it. 00:29:40.649 [2024-11-06 15:41:58.507412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.649 [2024-11-06 15:41:58.507442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 00:29:40.650 [2024-11-06 15:41:58.507791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.507821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 00:29:40.650 [2024-11-06 15:41:58.508197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.508226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 00:29:40.650 [2024-11-06 15:41:58.508534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.508563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 
00:29:40.650 [2024-11-06 15:41:58.508928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.508958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 00:29:40.650 [2024-11-06 15:41:58.509325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.509354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 00:29:40.650 [2024-11-06 15:41:58.509717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.509755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 00:29:40.650 [2024-11-06 15:41:58.510116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.510146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 00:29:40.650 [2024-11-06 15:41:58.510511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.510539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 00:29:40.650 [2024-11-06 15:41:58.510902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.510932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 00:29:40.650 [2024-11-06 15:41:58.511301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.511331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 00:29:40.650 [2024-11-06 15:41:58.511681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.511710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 00:29:40.650 [2024-11-06 15:41:58.511997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.512027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 00:29:40.650 [2024-11-06 15:41:58.512397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.650 [2024-11-06 15:41:58.512426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.650 qpair failed and we were unable to recover it. 
00:29:40.650 [2024-11-06 15:41:58.512827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.512858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.513290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.513320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.513685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.513713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.514071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.514101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.514397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.514426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.514782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.514812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.515182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.515211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.515582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.515621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.515876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.515906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.516260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.516289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.516622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.516651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.517030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.517060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.517411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.517439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.517810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.517840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.518188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.518217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.518585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.518613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.518985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.519016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.519419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.519447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.519698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.519727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.520124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.520155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.650 [2024-11-06 15:41:58.520520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.650 [2024-11-06 15:41:58.520548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.650 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.520931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.520962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.521329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.521358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.521730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.521769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.522119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.522148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.522511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.522540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.522894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.522924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.523175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.523207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.523567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.523596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.523941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.523970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.524339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.524367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.524727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.524778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.525200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.525229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.525482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.525511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.525843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.525880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.526233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.526263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.526622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.526650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.526986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.527018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.527355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.527383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.527743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.527782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.528139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.528168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.528513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.528541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.528900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.528931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.529297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.529326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.529768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.529798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.530181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.530210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.530554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.530585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.530969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.530999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.531359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.531389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.531756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.531785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.532147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.532176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.532541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.532570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.532930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.532960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.533309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.533340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.533717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.533754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.534181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.534210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.534607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.534637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.535000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.535031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.535393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.535422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.651 qpair failed and we were unable to recover it.
00:29:40.651 [2024-11-06 15:41:58.535788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.651 [2024-11-06 15:41:58.535818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.536177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.536205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.536550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.536586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.536956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.536986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.537341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.537369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.537713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.537741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.538103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.538133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.538495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.538523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.538870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.538900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.539262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.539291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.539652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.539679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.540031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.540060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.540422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.540451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.540698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.540726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.541089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.541119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.541480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.541509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.541875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.541906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.542255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.542283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.542656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.542684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.542940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.542970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.543356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.543386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.543621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.543653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.544022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.544053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.544425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.544455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.544779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.544810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.545166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.545195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.545555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.545584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.545822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.545851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.546225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.546254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.546568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.546597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.546959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.546990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.547251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.547279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.547632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.547661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.548003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.548034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.548402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.548431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.548670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.548701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.549056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.549087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.549463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.549492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.549854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.549882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.652 qpair failed and we were unable to recover it.
00:29:40.652 [2024-11-06 15:41:58.550280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.652 [2024-11-06 15:41:58.550308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.550670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.550699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.551055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.551085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.551465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.551494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.551848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.551880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.552246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.552274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.552658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.552687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.553060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.553091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.553470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.553500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.553874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.553903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.554270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.554298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.554659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.554687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.555125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.555155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.555524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.555552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.555913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.555943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.556284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.556313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.556690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.556718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.557097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.557127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.557493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.557523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.557900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.557931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.558295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.558324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.558684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.558712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.559067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.559097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.559461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.559491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.559869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.559899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.560270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.560299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.560552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.560583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.561058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.561089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.561433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.561463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.561833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.561862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.562228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.562258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.562623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.562658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.562998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.563029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.563361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.563390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.563761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.563791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.564127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.564157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.564519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.564548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.564820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.564850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.565195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.653 [2024-11-06 15:41:58.565224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.653 qpair failed and we were unable to recover it.
00:29:40.653 [2024-11-06 15:41:58.565592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.565621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.566009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.566038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.566413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.566442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.566802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.566832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.567204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.567232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.567611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.567640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.567985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.568017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.568387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.568415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.568777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.568807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.569165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.569195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.569562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.569591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.569970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.570000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.570369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.570398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.570765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.570795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.571161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.571190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.571549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.571578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.571932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.571960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.572325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.572354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.572664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.572694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.573063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.573099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.573442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.573473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.573819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.573850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.574213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.574242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.574609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.574638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.575003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.575033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.575270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.575302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.575687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.575716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.576116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.576146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.576393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.576426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.576810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.576840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.577187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.577217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.577561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.577590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.577931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.577961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.578326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.578356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.578721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.654 [2024-11-06 15:41:58.578757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.654 qpair failed and we were unable to recover it.
00:29:40.654 [2024-11-06 15:41:58.579004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.579036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.579397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.579427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.579677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.579706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.580077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.580106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.580437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.580466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.580814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.580845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.581084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.581112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.581475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.581504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.581958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.581989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.582344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.582373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.582734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.582771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.583113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.583148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.583512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.583541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.583904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.583933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.584304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.584335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.584694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.584724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.585103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.585132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.585430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.585459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.585711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.585743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.586132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.586162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.586523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.586552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.586912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.586943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.587306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.587334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.587695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.587725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.588176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.588206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.588572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.588602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.589014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.589044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.589476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.589506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.589867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.589897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.590258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.590287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.590527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.590556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.590927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.590958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.591318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.591348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.591706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.591735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.592080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.592110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.592473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.592504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.592780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.592811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.593194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.655 [2024-11-06 15:41:58.593223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.655 qpair failed and we were unable to recover it.
00:29:40.655 [2024-11-06 15:41:58.593592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.655 [2024-11-06 15:41:58.593622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.655 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.593923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.593956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.594224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.594254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.594587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.594618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.595012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.595044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.595406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.595435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.595812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.595866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.596274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.596303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.596647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.596676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.597062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.597092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 
00:29:40.656 [2024-11-06 15:41:58.597362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.597390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.597681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.597711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.598076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.598107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.598468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.598497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.598871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.598911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.599155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.599187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.599589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.599619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.600017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.600050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.600399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.600430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.600795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.600828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 
00:29:40.656 [2024-11-06 15:41:58.601109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.601138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.601498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.601528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.601957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.601988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.602352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.602381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.602756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.602786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.603141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.603171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.603531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.603560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.603903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.603933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.604280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.604310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.604673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.604703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 
00:29:40.656 [2024-11-06 15:41:58.605088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.605119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.605488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.605517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.605883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.605914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.656 [2024-11-06 15:41:58.606295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.656 [2024-11-06 15:41:58.606327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.656 qpair failed and we were unable to recover it. 00:29:40.929 [2024-11-06 15:41:58.606686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.606719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.606977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.607008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.607378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.607408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.607703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.607734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.608092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.608123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.608459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.608491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 
00:29:40.930 [2024-11-06 15:41:58.608845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.608880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.609250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.609287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.609636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.609665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.610031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.610061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.610423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.610454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.610812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.610842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.611094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.611124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.611491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.611520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.611766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.611795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.612175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.612205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 
00:29:40.930 [2024-11-06 15:41:58.612445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.612476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.612838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.612869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.613217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.613245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.613628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.613658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.614015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.614047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.614275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.614305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.614666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.614694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.615063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.615095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.615466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.615497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.615868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.615899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 
00:29:40.930 [2024-11-06 15:41:58.616129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.616158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.616539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.616568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.616931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.616961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.617221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.617252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.617613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.617644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.618025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.618058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.618404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.618434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.618689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.618719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.618974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.619012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.619365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.619397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 
00:29:40.930 [2024-11-06 15:41:58.619775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.619807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.930 [2024-11-06 15:41:58.620135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.930 [2024-11-06 15:41:58.620166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.930 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.620534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.620564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.620928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.620961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.621314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.621344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.621711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.621742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.622137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.622167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.622529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.622560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.622802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.622833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.623223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.623252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 
00:29:40.931 [2024-11-06 15:41:58.623599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.623630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.623993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.624024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.624392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.624423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.624786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.624818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.625210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.625247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.625608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.625639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.625978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.626012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.626268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.626296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.626638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.626670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.627053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.627084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 
00:29:40.931 [2024-11-06 15:41:58.627440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.627469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.627840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.627871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.628232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.628263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.628627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.628655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.629003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.629034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.629386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.629416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.629834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.629867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.630222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.630253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.630592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.630620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.631034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.631064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 
00:29:40.931 [2024-11-06 15:41:58.631422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.631453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.631806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.631836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.632206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.632236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.632608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.632637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.632930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.632961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.633331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.633359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.633611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.633639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.633866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.633896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.634256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.634285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 00:29:40.931 [2024-11-06 15:41:58.634652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.634683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.931 qpair failed and we were unable to recover it. 
00:29:40.931 [2024-11-06 15:41:58.635042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.931 [2024-11-06 15:41:58.635073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.635429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.635458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.635824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.635854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.636219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.636250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.636622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.636651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.636998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.637031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.637387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.637416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.637774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.637806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.638199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.638231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.638579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.638609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 
00:29:40.932 [2024-11-06 15:41:58.638982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.639015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.639351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.639390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.639684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.639714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.640129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.640159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.640531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.640560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.640820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.640851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.641280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.641312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.641660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.641695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.642051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.642083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 00:29:40.932 [2024-11-06 15:41:58.642461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.932 [2024-11-06 15:41:58.642490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.932 qpair failed and we were unable to recover it. 
00:29:40.932 [2024-11-06 15:41:58.642825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.932 [2024-11-06 15:41:58.642855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:40.932 qpair failed and we were unable to recover it.
00:29:40.938 [... this three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats with fresh timestamps from 15:41:58.642825 through 15:41:58.722270 ...]
00:29:40.938 [2024-11-06 15:41:58.722636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.722666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.723029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.723059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.723428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.723459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.723816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.723848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.724233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.724263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.724619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.724648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.725004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.725035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.725414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.725445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.725812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.725841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.726218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.726247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 
00:29:40.938 [2024-11-06 15:41:58.726618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.726648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.726893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.726923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.727293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.727322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.727658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.727693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.728143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.728176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.728551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.728580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.728932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.728965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.729238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.729269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.729620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.729648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.730016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.730047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 
00:29:40.938 [2024-11-06 15:41:58.730414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.730445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.730757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.730788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.731251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.731282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.731630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.731661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.732032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.938 [2024-11-06 15:41:58.732064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.938 qpair failed and we were unable to recover it. 00:29:40.938 [2024-11-06 15:41:58.732443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.732473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.732832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.732862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.733214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.733244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.733640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.733669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.733933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.733966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 
00:29:40.939 [2024-11-06 15:41:58.734369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.734400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.734761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.734793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.735159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.735188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.735546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.735575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.735926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.735957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.736324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.736353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.736715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.736744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.737120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.737148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.737498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.737528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.737899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.737931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 
00:29:40.939 [2024-11-06 15:41:58.738297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.738340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.738711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.738740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.739091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.739121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.739491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.739520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.739888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.739920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.740290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.740319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.740676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.740703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.741060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.741090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.741397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.741428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.741847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.741877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 
00:29:40.939 [2024-11-06 15:41:58.742230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.742259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.742567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.742596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.742969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.742999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.743329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.743359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.743768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.743799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.744164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.744193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.744543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.744572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.744927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.744957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.745294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.745323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.939 qpair failed and we were unable to recover it. 00:29:40.939 [2024-11-06 15:41:58.745682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.939 [2024-11-06 15:41:58.745711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 
00:29:40.940 [2024-11-06 15:41:58.746059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.746089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.746447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.746476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.746822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.746854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.747224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.747253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.747625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.747654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.748015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.748049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.748404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.748434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.748771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.748801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.749050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.749081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.749346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.749374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 
00:29:40.940 [2024-11-06 15:41:58.749722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.749759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.750104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.750133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.750496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.750525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.750881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.750912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.751277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.751306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.751667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.751695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.752140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.752172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.752531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.752561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.752919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.752950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.753302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.753332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 
00:29:40.940 [2024-11-06 15:41:58.753692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.753720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.754114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.754146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.754488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.754517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.754867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.754898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.755168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.755198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.755549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.755577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.755957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.755987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.756344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.756373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.756708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.756736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.757108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.757137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 
00:29:40.940 [2024-11-06 15:41:58.757510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.757540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.757910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.757940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.758173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.758205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.758582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.758612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.758971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.759002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.759365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.759395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.759729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.759780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.760121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.940 [2024-11-06 15:41:58.760152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.940 qpair failed and we were unable to recover it. 00:29:40.940 [2024-11-06 15:41:58.760520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.760550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.760904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.760934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 
00:29:40.941 [2024-11-06 15:41:58.761291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.761319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.761772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.761802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.762152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.762181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.762529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.762559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.762902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.762933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.763335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.763364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.763719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.763768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.764138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.764167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.764524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.764561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.764904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.764934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 
00:29:40.941 [2024-11-06 15:41:58.765304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.765333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.765696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.765725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.766091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.766121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.766383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.766415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.766652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.766685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.767049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.767081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.767442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.767471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.767837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.767868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.768218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.768246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.768499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.768528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 
00:29:40.941 [2024-11-06 15:41:58.768886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.768916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.769293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.769322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.769689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.769718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.770080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.770112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.770462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.770491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.770876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.770906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.771295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.771325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.771779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.771812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.772143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.772172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.772537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.772566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 
00:29:40.941 [2024-11-06 15:41:58.772933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.772964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.773322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.773351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.773606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.773640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.773896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.773926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.774266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.774296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.774649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.774684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.941 [2024-11-06 15:41:58.775050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.941 [2024-11-06 15:41:58.775081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.941 qpair failed and we were unable to recover it. 00:29:40.942 [2024-11-06 15:41:58.775441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.942 [2024-11-06 15:41:58.775472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.942 qpair failed and we were unable to recover it. 00:29:40.942 [2024-11-06 15:41:58.775855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.942 [2024-11-06 15:41:58.775887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.942 qpair failed and we were unable to recover it. 00:29:40.942 [2024-11-06 15:41:58.776247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.942 [2024-11-06 15:41:58.776276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.942 qpair failed and we were unable to recover it. 
00:29:40.947 [2024-11-06 15:41:58.853679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.853708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.854172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.854202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.854439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.854469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.854867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.854898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.855137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.855166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.855523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.855554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.855944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.855974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.856329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.856358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.856735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.856782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.857171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.857201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 
00:29:40.947 [2024-11-06 15:41:58.857575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.857604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.857977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.858008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.858273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.858309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.858548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.858580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.858986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.859017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.859248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.859281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.859648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.859677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.860037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.860069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.860410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.860446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.860800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.860830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 
00:29:40.947 [2024-11-06 15:41:58.861112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.861144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.861503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.861534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.861895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.861927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.862173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.862202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.862566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.947 [2024-11-06 15:41:58.862596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.947 qpair failed and we were unable to recover it. 00:29:40.947 [2024-11-06 15:41:58.862862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.862893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.863316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.863346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.863755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.863786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.864010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.864040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.864390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.864419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 
00:29:40.948 [2024-11-06 15:41:58.864851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.864882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.865247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.865278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.865638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.865667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.866108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.866140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.866524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.866555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.866822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.866853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.867254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.867284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.867696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.867726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.868103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.868134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.868506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.868543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 
00:29:40.948 [2024-11-06 15:41:58.868889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.868920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.869308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.869339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.869705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.869734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.870001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.870031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.870291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.870325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.870691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.870721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.871036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.871066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.871331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.871361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.871737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.871778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.872125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.872154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 
00:29:40.948 [2024-11-06 15:41:58.872522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.872552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.872978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.873010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.873263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.873294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.873722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.873766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.874171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.874202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.874562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.874591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.874891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.874922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.875304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.875333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.875706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.875735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.948 qpair failed and we were unable to recover it. 00:29:40.948 [2024-11-06 15:41:58.876130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.948 [2024-11-06 15:41:58.876160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 
00:29:40.949 [2024-11-06 15:41:58.876318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.876351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.876758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.876790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.877183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.877215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.877596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.877627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.877891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.877923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.878304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.878333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.878608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.878638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.879012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.879045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.879419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.879450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.879819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.879850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 
00:29:40.949 [2024-11-06 15:41:58.880246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.880277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.880699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.880730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.881130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.881159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.881536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.881565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.881944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.881979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.882232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.882261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.882512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.882541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.882780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.882810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.883356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.883385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.883601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.883630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 
00:29:40.949 [2024-11-06 15:41:58.883978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.884010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.884369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.884398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.884845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.884877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.885226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.885256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.885548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.885578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.886038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.886069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.886421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.886452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.886712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.886744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.886919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.886947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.887334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.887364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 
00:29:40.949 [2024-11-06 15:41:58.887647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.887676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.888035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.888066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.888423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.888451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.888822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.888854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.889238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.889267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.889639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.889668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.889936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.889967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.890341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.949 [2024-11-06 15:41:58.890371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.949 qpair failed and we were unable to recover it. 00:29:40.949 [2024-11-06 15:41:58.890715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.890759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.891017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.891048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 
00:29:40.950 [2024-11-06 15:41:58.891398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.891433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.891817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.891849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.892129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.892157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.892517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.892547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.892906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.892939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.893381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.893411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.893813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.893845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.894210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.894245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.894579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.894609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.895172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.895203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 
00:29:40.950 [2024-11-06 15:41:58.895613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.895643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.896078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.896108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.896360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.896391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.896767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.896798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.897198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.897229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.897584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.897613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.897875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.897906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.898262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.898291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.898729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.898769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:40.950 [2024-11-06 15:41:58.899138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.899168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 
00:29:40.950 [2024-11-06 15:41:58.899529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.950 [2024-11-06 15:41:58.899558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:40.950 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-06 15:41:58.899860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-06 15:41:58.899894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-06 15:41:58.900269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-06 15:41:58.900302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-06 15:41:58.900667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-06 15:41:58.900697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-06 15:41:58.901134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-06 15:41:58.901165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-06 15:41:58.901526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-06 15:41:58.901556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-06 15:41:58.901826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-06 15:41:58.901857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-06 15:41:58.902231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.902261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-06 15:41:58.902636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.902666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-06 15:41:58.903074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.903104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 
00:29:41.224 [2024-11-06 15:41:58.903452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.903482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-06 15:41:58.903734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.903775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-06 15:41:58.904159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.904187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-06 15:41:58.904555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.904583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-06 15:41:58.905024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.905070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-06 15:41:58.905323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.905352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-06 15:41:58.905714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.905743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-06 15:41:58.906073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.906113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-06 15:41:58.906375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.906404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-06 15:41:58.906772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-06 15:41:58.906803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 
00:29:41.229 [2024-11-06 15:41:58.979777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.979807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.980191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.980219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.980560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.980590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.980958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.980988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.981447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.981476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.981836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.981866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.982250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.982278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.982640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.982669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.983043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.983074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.983460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.983489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 
00:29:41.229 [2024-11-06 15:41:58.983926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.983958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.984320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.984349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.984655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.984684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.985044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.985074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.985432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.985461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.985842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.985871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.986231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.986262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.986611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.986642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.986983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.987014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.987375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.987404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 
00:29:41.229 [2024-11-06 15:41:58.987768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.987799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.229 [2024-11-06 15:41:58.988057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.229 [2024-11-06 15:41:58.988095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.229 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.988468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.988497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.988862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.988893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.989263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.989293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.989641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.989670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.990017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.990048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.990417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.990446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.990808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.990837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.991296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.991325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 
00:29:41.230 [2024-11-06 15:41:58.991645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.991674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.991954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.991990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.992367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.992397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.992760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.992792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.993155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.993184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.993547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.993577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.994007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.994038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.994391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.994420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.994773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.994803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.995067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.995096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 
00:29:41.230 [2024-11-06 15:41:58.995444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.995473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.995836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.995869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.996239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.996269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.996631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.996659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.997019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.997048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.997403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.997432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.997686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.997715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.998098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.998128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.998492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.998521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.998979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.999010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 
00:29:41.230 [2024-11-06 15:41:58.999377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.999407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:58.999652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:58.999681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:59.000039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:59.000070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:59.000425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:59.000457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:59.000818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:59.000849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:59.001236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:59.001265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:59.001627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:59.001655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:59.002010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:59.002041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:59.002402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:59.002432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.230 [2024-11-06 15:41:59.002804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:59.002835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 
00:29:41.230 [2024-11-06 15:41:59.003173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.230 [2024-11-06 15:41:59.003202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.230 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.003440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.003472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.003823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.003855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.004103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.004131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.004404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.004433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.004865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.004897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.005561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.005597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.005948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.005985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.006343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.006373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.006713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.006759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 
00:29:41.231 [2024-11-06 15:41:59.007122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.007151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.007521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.007551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.007914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.007944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.008293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.008322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.008671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.008702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.009057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.009086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.009449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.009481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.009841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.009872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.010171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.010201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.010564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.010593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 
00:29:41.231 [2024-11-06 15:41:59.010904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.010933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.011306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.011335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.011693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.011723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.012168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.012203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.012575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.012605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.012935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.012967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.013352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.013381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.013758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.013791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.014138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.014167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.014535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.014570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 
00:29:41.231 [2024-11-06 15:41:59.014953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.014985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.015372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.015401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.015768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.231 [2024-11-06 15:41:59.015798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.231 qpair failed and we were unable to recover it. 00:29:41.231 [2024-11-06 15:41:59.016141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.016172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.016552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.016581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.016937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.016967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.017327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.017356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.017657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.017686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.018043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.018074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.018485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.018515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 
00:29:41.232 [2024-11-06 15:41:59.018853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.018883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.019257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.019285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.019529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.019557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.019910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.019942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.020303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.020332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.020689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.020718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.021066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.021098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.021455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.021485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.021873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.021905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.022261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.022290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 
00:29:41.232 [2024-11-06 15:41:59.022662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.022691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.022943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.022974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.023348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.023377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.023729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.023771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.024173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.024202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.024536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.024565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.024816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.024852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.025232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.025262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.025634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.025663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.026030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.026061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 
00:29:41.232 [2024-11-06 15:41:59.026419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.026449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.026803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.026833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.027188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.027217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.027526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.027557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.027918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.027948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.028391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.028420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.028719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.028757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.029110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.029139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.029504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.029534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.029891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.029922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 
00:29:41.232 [2024-11-06 15:41:59.030286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.030316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.232 [2024-11-06 15:41:59.030676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.232 [2024-11-06 15:41:59.030705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.232 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.031065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.031095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.031454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.031483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.031845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.031877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.032227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.032256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.032614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.032643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.032987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.033016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.033432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.033462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.033817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.033848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 
00:29:41.233 [2024-11-06 15:41:59.034199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.034235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.034550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.034580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.034824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.034854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.035206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.035241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.035601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.035630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.036088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.036118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.036453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.036481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.036824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.036856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.037245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.037275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.037521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.037549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 
00:29:41.233 [2024-11-06 15:41:59.037906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.037938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.038295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.038323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.038686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.038716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.039088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.039119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.039457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.039486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.039732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.039782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.040172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.040201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.040548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.040579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.040825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.040857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.041200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.041231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 
00:29:41.233 [2024-11-06 15:41:59.041611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.041639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.041998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.042029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.042402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.042431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.042802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.042832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.043191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.043219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.233 [2024-11-06 15:41:59.043598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.233 [2024-11-06 15:41:59.043628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.233 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.043974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.044006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.044363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.044393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.044681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.044711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.045023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.045053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 
00:29:41.234 [2024-11-06 15:41:59.045411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.045441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.045808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.045841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.046205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.046235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.046574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.046603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.047024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.047054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.047403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.047433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.047812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.047844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.048205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.048235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.048576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.048605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.048974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.049004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 
00:29:41.234 [2024-11-06 15:41:59.049364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.049393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.049778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.049810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.050151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.050182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.050433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.050462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.050823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.050853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.051214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.051243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.051608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.051637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.051999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.052029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.052361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.052392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.052761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.052793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 
00:29:41.234 [2024-11-06 15:41:59.053163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.053193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.053552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.053581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.053933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.053964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.054323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.054352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.054717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.054775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.055140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.055170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.055537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.055566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.055930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.055961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.056305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.056334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.056696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.056726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 
00:29:41.234 [2024-11-06 15:41:59.057030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.057060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.057442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.057471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.057834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.057865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.058235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.058264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.234 [2024-11-06 15:41:59.058606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.234 [2024-11-06 15:41:59.058636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.234 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.058979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.059011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.059363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.059392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.059779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.059810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.060108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.060137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.060384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.060413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 
00:29:41.235 [2024-11-06 15:41:59.060766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.060796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.061146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.061189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.061562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.061591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.062063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.062094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.062433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.062463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.062821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.062851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.063204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.063233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.063598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.063627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.064074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.064104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.064443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.064474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 
00:29:41.235 [2024-11-06 15:41:59.064827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.064857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.065218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.065247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.065618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.065649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.065999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.066029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.066391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.066421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.066785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.066816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.067174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.067202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.067573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.067603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.067908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.067939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.068309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.068339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 
00:29:41.235 [2024-11-06 15:41:59.068706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.068734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.069082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.069111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.069476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.069505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.069886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.069917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.070310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.070341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.070711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.070740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.071102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.071132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.071470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.071499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.071862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.071898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.072249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.072279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 
00:29:41.235 [2024-11-06 15:41:59.072623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.072653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.073037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.073067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.073432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.073461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.235 qpair failed and we were unable to recover it. 00:29:41.235 [2024-11-06 15:41:59.073720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.235 [2024-11-06 15:41:59.073762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.074150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.074180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.074546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.074577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.074940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.074972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.075334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.075363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.075717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.075756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.075989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.076022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 
00:29:41.236 [2024-11-06 15:41:59.076400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.076430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.076802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.076833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.077181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.077211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.077578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.077608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.077985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.078016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.078381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.078410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.078785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.078816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.079218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.079247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.079611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.079640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 00:29:41.236 [2024-11-06 15:41:59.079911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.236 [2024-11-06 15:41:59.079941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.236 qpair failed and we were unable to recover it. 
00:29:41.236 [2024-11-06 15:41:59.080389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.080422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.080766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.080798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.081198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.081228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.081515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.081544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3967548 Killed "${NVMF_APP[@]}" "$@"
00:29:41.236 [2024-11-06 15:41:59.081925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.081958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.082316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.082346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:41.236 [2024-11-06 15:41:59.082712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.082741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.083003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:41.236 [2024-11-06 15:41:59.083033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:41.236 [2024-11-06 15:41:59.083378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.083408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:41.236 [2024-11-06 15:41:59.083775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.083807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.084151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.084181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.084561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.084590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.084927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.084959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.085324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.085353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.085720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.085758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.086119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.086148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.086387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.086426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.086817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.086848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.087200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.087228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.236 [2024-11-06 15:41:59.087584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.236 [2024-11-06 15:41:59.087615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.236 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.087958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.087988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.088223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.088252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.088512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.088542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.088907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.088938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.089347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.089376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.089731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.089770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.090153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.090183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.090424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.090452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.090818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.090849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.091237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.091266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.091708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.091737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3968432
00:29:41.237 [2024-11-06 15:41:59.092171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.092201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3968432
00:29:41.237 [2024-11-06 15:41:59.092556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.092585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:41.237 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3968432 ']'
00:29:41.237 [2024-11-06 15:41:59.093016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:41.237 [2024-11-06 15:41:59.093047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:41.237 [2024-11-06 15:41:59.093411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.093441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:41.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:41.237 [2024-11-06 15:41:59.093673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.093703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:41.237 [2024-11-06 15:41:59.093964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.093995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.094352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.094383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.094616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.094650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.095048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.095080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.095456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.095487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.095862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.095893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [2024-11-06 15:41:59.096256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.237 [2024-11-06 15:41:59.096286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.237 qpair failed and we were unable to recover it.
00:29:41.237 [... 2024-11-06 15:41:59.096649 through 15:41:59.149865: the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet repeated for 129 further reconnect attempts against tqpair=0x131f010 (10.0.0.2, port 4420) ...]
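errno 111 on Linux is ECONNREFUSED: the host at 10.0.0.2 is reachable, but nothing is accepting connections on NVMe/TCP port 4420 at that moment, so the initiator keeps retrying the qpair connect and logging the same pair of errors. A minimal standalone sketch of the underlying syscall pattern (an illustration only, not SPDK's posix_sock_create()) that produces the same errno:

/* Illustration: a plain TCP connect() to a reachable host with no
 * listener on the target port fails with errno 111 (ECONNREFUSED)
 * on Linux, the errno reported by posix_sock_create() above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        /* When the peer refuses (no listener), this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Each retry above re-runs this same connect() path and fails the same way until the target side is listening, which is why the message pair repeats until the nvmf target process starts below.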
00:29:41.241 [... 2024-11-06 15:41:59.150245 through 15:41:59.151085: three more repetitions of the same connect() failed / qpair failed triplet ...]
00:29:41.241 [2024-11-06 15:41:59.151362] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:29:41.241 [2024-11-06 15:41:59.151433] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:41.241 [... 2024-11-06 15:41:59.151452 through 15:41:59.153508: six more repetitions of the same triplet ...]
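The two initialization lines above are the nvmf target application starting up: SPDK logs its version and git sha, then the argv it forwards to DPDK's rte_eal_init(). A hedged sketch of that hand-off, reusing the logged options (init_eal() is a hypothetical wrapper for illustration, not SPDK's actual code path):

/* Sketch under assumptions: SPDK builds an EAL argv like the
 * "[ DPDK EAL parameters: ... ]" line above and passes it to
 * rte_eal_init(). init_eal() is illustrative, not SPDK code. */
#include <rte_eal.h>

int init_eal(void)
{
    char *eal_argv[] = {
        "nvmf",                            /* app name / log prefix */
        "-c", "0xF0",                      /* core mask: cores 4..7 */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--base-virtaddr=0x200000000000",  /* stable hugepage mappings */
        "--match-allocations",
        "--file-prefix=spdk0",             /* per-process runtime files */
        "--proc-type=auto",
        NULL,
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0])) - 1;

    /* Returns the number of parsed arguments on success, -1 on error. */
    return rte_eal_init(eal_argc, eal_argv);
}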
00:29:41.241 [... 2024-11-06 15:41:59.153790 through 15:41:59.180108: the connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet repeated for 70 further reconnect attempts against tqpair=0x131f010 (10.0.0.2, port 4420) while the target finished initializing ...]
00:29:41.243 [2024-11-06 15:41:59.180451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.180481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.180851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.180882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.181253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.181282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.181665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.181701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.182132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.182164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.182517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.182547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.182884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.182917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.183299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.183328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.183578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.183607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.184081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.184112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 
00:29:41.243 [2024-11-06 15:41:59.184349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.184378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.184613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.184645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.185014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.185045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.185413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.185443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.185810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.185841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.186198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.186228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.186592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.186623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.186973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.187006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.187372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.187401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.187771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.187802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 
00:29:41.243 [2024-11-06 15:41:59.188174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.188203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.188588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.188617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.188973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.189005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.189384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.189414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.189794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.243 [2024-11-06 15:41:59.189824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.243 qpair failed and we were unable to recover it. 00:29:41.243 [2024-11-06 15:41:59.190098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.244 [2024-11-06 15:41:59.190126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.244 qpair failed and we were unable to recover it. 00:29:41.244 [2024-11-06 15:41:59.190393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.244 [2024-11-06 15:41:59.190426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.244 qpair failed and we were unable to recover it. 00:29:41.244 [2024-11-06 15:41:59.190790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.244 [2024-11-06 15:41:59.190822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.244 qpair failed and we were unable to recover it. 00:29:41.244 [2024-11-06 15:41:59.191217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.244 [2024-11-06 15:41:59.191247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.244 qpair failed and we were unable to recover it. 00:29:41.244 [2024-11-06 15:41:59.191602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.244 [2024-11-06 15:41:59.191632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.244 qpair failed and we were unable to recover it. 
00:29:41.244 [2024-11-06 15:41:59.192014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.244 [2024-11-06 15:41:59.192045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.244 qpair failed and we were unable to recover it. 00:29:41.244 [2024-11-06 15:41:59.192421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.244 [2024-11-06 15:41:59.192450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.244 qpair failed and we were unable to recover it. 00:29:41.244 [2024-11-06 15:41:59.192830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.244 [2024-11-06 15:41:59.192860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.244 qpair failed and we were unable to recover it. 00:29:41.244 [2024-11-06 15:41:59.193129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.244 [2024-11-06 15:41:59.193161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.244 qpair failed and we were unable to recover it. 00:29:41.518 [2024-11-06 15:41:59.193574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.518 [2024-11-06 15:41:59.193606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.518 qpair failed and we were unable to recover it. 00:29:41.518 [2024-11-06 15:41:59.194048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.518 [2024-11-06 15:41:59.194081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.518 qpair failed and we were unable to recover it. 00:29:41.518 [2024-11-06 15:41:59.194238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.518 [2024-11-06 15:41:59.194265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.518 qpair failed and we were unable to recover it. 00:29:41.518 [2024-11-06 15:41:59.194648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.518 [2024-11-06 15:41:59.194677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.518 qpair failed and we were unable to recover it. 00:29:41.518 [2024-11-06 15:41:59.195061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.518 [2024-11-06 15:41:59.195092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.518 qpair failed and we were unable to recover it. 00:29:41.518 [2024-11-06 15:41:59.195333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.518 [2024-11-06 15:41:59.195363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 
00:29:41.519 [2024-11-06 15:41:59.195818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.195849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.196108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.196139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.196517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.196546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.196896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.196926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.197277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.197306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.197568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.197597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.197951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.197980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.198380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.198411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.198729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.198772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.199200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.199228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 
00:29:41.519 [2024-11-06 15:41:59.199473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.199501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.199793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.199825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.200200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.200229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.200487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.200517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.200885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.200916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.201360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.201389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.201766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.201796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.202158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.202188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.202447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.202479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.202849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.202879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 
00:29:41.519 [2024-11-06 15:41:59.203247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.203276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.203703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.203732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.204187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.204217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.204587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.204617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.204998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.205029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.205404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.205434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.205701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.205730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.206069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.206099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.206548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.206576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.206836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.206866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 
00:29:41.519 [2024-11-06 15:41:59.207221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.207252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.207616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.207652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.208011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.208042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.208391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.208420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.208778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.208808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.209083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.209112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.209494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.209523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.209888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.519 [2024-11-06 15:41:59.209920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.519 qpair failed and we were unable to recover it. 00:29:41.519 [2024-11-06 15:41:59.210285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.210314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.210681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.210710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 
00:29:41.520 [2024-11-06 15:41:59.210976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.211008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.211408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.211438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.211806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.211836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.212212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.212241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.212614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.212645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.212891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.212923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.213277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.213306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.213669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.213699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.214066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.214096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.214354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.214383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 
00:29:41.520 [2024-11-06 15:41:59.214757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.214789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.215146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.215177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.215546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.215576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.215929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.215959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.216327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.216356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.216728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.216770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.217113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.217143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.217511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.217540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.217797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.217833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.218117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.218148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 
00:29:41.520 [2024-11-06 15:41:59.218592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.218621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.218997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.219028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.219396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.219425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.219816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.219854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.220191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.220220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.220594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.220622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.221007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.221038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.221385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.221415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.221684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.221713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.222077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.222109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 
00:29:41.520 [2024-11-06 15:41:59.222484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.222513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.222882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.222914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.223287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.223317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.223681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.223711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.224132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.224162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.224400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.224428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.520 [2024-11-06 15:41:59.224779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.520 [2024-11-06 15:41:59.224811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.520 qpair failed and we were unable to recover it. 00:29:41.521 [2024-11-06 15:41:59.225190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.225217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 00:29:41.521 [2024-11-06 15:41:59.225608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.225637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 00:29:41.521 [2024-11-06 15:41:59.225990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.226021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 
00:29:41.521 [2024-11-06 15:41:59.226391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.226421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 00:29:41.521 [2024-11-06 15:41:59.226671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.226700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 00:29:41.521 [2024-11-06 15:41:59.227172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.227203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 00:29:41.521 [2024-11-06 15:41:59.227552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.227581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 00:29:41.521 [2024-11-06 15:41:59.227955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.227987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 00:29:41.521 [2024-11-06 15:41:59.228350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.228379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 00:29:41.521 [2024-11-06 15:41:59.228757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.228788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 00:29:41.521 [2024-11-06 15:41:59.229151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.229181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 00:29:41.521 [2024-11-06 15:41:59.229543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.229572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 00:29:41.521 [2024-11-06 15:41:59.229911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.521 [2024-11-06 15:41:59.229940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.521 qpair failed and we were unable to recover it. 
00:29:41.521 [2024-11-06 15:41:59.230198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.521 [2024-11-06 15:41:59.230227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.521 qpair failed and we were unable to recover it.
00:29:41.521 [... the same connect()/qpair retry failure (errno = 111, tqpair=0x131f010, 10.0.0.2:4420) repeats through 2024-11-06 15:41:59.255791; duplicate records elided ...]
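errno = 111 on Linux is ECONNREFUSED: the initiator's TCP SYN to 10.0.0.2:4420 is actively refused because nothing is listening on that port yet, so nvme_tcp_qpair_connect_sock cannot establish the qpair and the driver keeps retrying. This is consistent with the nvmf target still starting up at this point in the log (its reactor-start notices appear further down). A minimal standalone C sketch, not SPDK code, with the address and port taken from the records above, reproduces the same errno when the host is reachable but no listener is present:

    /* connect_refused.c: demonstrate errno 111 (ECONNREFUSED) from connect().
     * Build: cc -o connect_refused connect_refused.c */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With nothing listening on 10.0.0.2:4420 this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }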
00:29:41.523 [... connect()/qpair retry failures continue; duplicate records elided ...]
00:29:41.523 [2024-11-06 15:41:59.258221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:41.523 [... retry failures continue through 2024-11-06 15:41:59.259658; duplicate records elided ...]
00:29:41.523 [... identical connect()/qpair retry failures (errno = 111, tqpair=0x131f010, 10.0.0.2:4420) repeat through 2024-11-06 15:41:59.301727; duplicate records elided ...]
[... connect() failed (errno = 111) / sock connection error / "qpair failed" triples continue for tqpair=0x131f010 ...]
00:29:41.526 [2024-11-06 15:41:59.302731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:41.526 [2024-11-06 15:41:59.302783] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:41.526 [2024-11-06 15:41:59.302790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:41.526 [2024-11-06 15:41:59.302795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:41.526 [2024-11-06 15:41:59.302800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[... connect() failed / sock connection error / "qpair failed" triples continue through 15:41:59.304733 ...]
00:29:41.526 [2024-11-06 15:41:59.304809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:41.526 [2024-11-06 15:41:59.305029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:41.526 [2024-11-06 15:41:59.305173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:41.526 [2024-11-06 15:41:59.305174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
[... connect() failed (errno = 111) / sock connection error / "qpair failed" triples for tqpair=0x131f010 continue through 15:41:59.345007 ...]
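The four reactor NOTICEs reflect SPDK's threading model: one polling event loop per core in the application's core mask, each pinned to its core. A minimal sketch of that pattern (plain pthreads with CPU affinity, assumed here for illustration; this is not SPDK's reactor.c):

    /* reactor_pin.c - hedged sketch of the pattern behind the NOTICEs above:
     * one event-loop thread per core, each pinned to its core (4-7 as in
     * the log). Linux/glibc; not SPDK code. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *reactor_run(void *arg)
    {
        long core = (long)arg;
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(core, &set);
        /* Pin this thread to its core, as a reactor is pinned in SPDK. */
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        printf("Reactor started on core %ld\n", core);
        /* A real reactor would now poll its registered pollers in a loop. */
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[4];

        for (long core = 4; core <= 7; core++) {
            pthread_create(&threads[core - 4], NULL, reactor_run, (void *)core);
        }
        for (int i = 0; i < 4; i++) {
            pthread_join(threads[i], NULL);
        }
        return 0;
    }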
00:29:41.529 Read completed with error (sct=0, sc=8)
00:29:41.529 starting I/O failed
[... 31 more Read/Write completions (24 reads and 8 writes in total) fail with (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:29:41.530 [2024-11-06 15:41:59.345852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.530 [2024-11-06 15:41:59.346232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.530 [2024-11-06 15:41:59.346293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420
00:29:41.530 qpair failed and we were unable to recover it.
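In these completions, sct and sc are the NVMe Status Code Type and Status Code from the completion queue entry: sct=0 is Generic Command Status, and within that type sc=0x08 is "Command Aborted due to SQ Deletion", consistent with the outstanding I/O being flushed when the qpair is torn down after the CQ transport error. A hedged standalone sketch of how the pair is unpacked from a completion's 16-bit status field (bit layout per the NVMe base specification; this is not SPDK's struct definitions):

    /* nvme_status_decode.c - sketch: decode SCT/SC from the 16-bit status
     * half of NVMe CQE Dword 3. Per the NVMe base spec: bit 0 = phase tag,
     * bits 8:1 = SC (Status Code), bits 11:9 = SCT (Status Code Type). */
    #include <stdint.h>
    #include <stdio.h>

    static void decode_status(uint16_t status)
    {
        uint8_t sc  = (status >> 1) & 0xff; /* Status Code */
        uint8_t sct = (status >> 9) & 0x7;  /* Status Code Type: 0 = generic */

        printf("(sct=%u, sc=%u)", sct, sc);
        if (sct == 0 && sc == 0x08) {
            /* Matches the aborted I/Os in the log above. */
            printf(" generic: Command Aborted due to SQ Deletion");
        }
        printf("\n");
    }

    int main(void)
    {
        /* A status field that would be logged as (sct=0, sc=8):
         * SCT 0, SC 0x08, phase bit set. */
        decode_status((0x0 << 9) | (0x08 << 1) | 0x1);
        return 0;
    }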
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats for tqpair=0x7fea28000b90, addr=10.0.0.2, port=4420 through 15:41:59.364303 ...]
00:29:41.531 [2024-11-06 15:41:59.364546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-06 15:41:59.364579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-06 15:41:59.364952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-06 15:41:59.364983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-06 15:41:59.365188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-06 15:41:59.365216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-06 15:41:59.365565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-06 15:41:59.365595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-06 15:41:59.365839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-06 15:41:59.365870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-06 15:41:59.366230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-06 15:41:59.366260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-06 15:41:59.366662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-06 15:41:59.366692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-06 15:41:59.367047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-06 15:41:59.367077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-06 15:41:59.367418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-06 15:41:59.367447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-06 15:41:59.367698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-06 15:41:59.367731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 
00:29:41.531 [2024-11-06 15:41:59.368155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-06 15:41:59.368185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-06 15:41:59.368403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-06 15:41:59.368433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-06 15:41:59.368715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.368758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.369212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.369243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.369593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.369623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.370064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.370094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.370449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.370478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.370857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.370887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.371094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.371123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.371374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.371403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 
00:29:41.532 [2024-11-06 15:41:59.371792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.371822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.372093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.372122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.372501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.372531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.372894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.372926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.373304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.373333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.373584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.373613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.373995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.374026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.374288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.374317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.374513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.374544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.374891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.374922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 
00:29:41.532 [2024-11-06 15:41:59.375269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.375297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.375645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.375675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.375901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.375931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.376289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.376319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.376671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.376702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.377074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.377104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.377459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.377488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.377871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.377903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.378282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.378314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.378540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.378569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 
00:29:41.532 [2024-11-06 15:41:59.378924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.378955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.379214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.379242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.379594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.532 [2024-11-06 15:41:59.379623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.532 qpair failed and we were unable to recover it. 00:29:41.532 [2024-11-06 15:41:59.379966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.380000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 00:29:41.533 [2024-11-06 15:41:59.380361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.380390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 00:29:41.533 [2024-11-06 15:41:59.380772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.380802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 00:29:41.533 [2024-11-06 15:41:59.381166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.381195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 00:29:41.533 [2024-11-06 15:41:59.381545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.381575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 00:29:41.533 [2024-11-06 15:41:59.381848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.381878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 00:29:41.533 [2024-11-06 15:41:59.382233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.382263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 
00:29:41.533 [2024-11-06 15:41:59.382630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.382659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 00:29:41.533 [2024-11-06 15:41:59.382765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.382796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 00:29:41.533 [2024-11-06 15:41:59.383052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.383088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 00:29:41.533 [2024-11-06 15:41:59.383469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.383500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 00:29:41.533 [2024-11-06 15:41:59.383877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.383908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 00:29:41.533 [2024-11-06 15:41:59.384283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.384314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 00:29:41.533 [2024-11-06 15:41:59.384404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.533 [2024-11-06 15:41:59.384431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.533 qpair failed and we were unable to recover it. 
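The errno = 111 in the run above is Linux ECONNREFUSED: each reconnect attempt reaches 10.0.0.2, but nothing is accepting TCP connections on port 4420 at that moment, so the host-side qpair cannot be re-established. As a rough illustration, a minimal standalone POSIX sketch (not SPDK's posix.c; the address and port are copied from the log, and it assumes the target host is reachable but has no listener on that port) produces the same errno:

/* connect_refused.c - reproduce the connect() failure mode logged above.
 * Build: cc -o connect_refused connect_refused.c */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    /* Against a reachable host with no listener on the port, this prints:
     *   connect() failed, errno = 111 (Connection refused) */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}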
00:29:41.533 Read completed with error (sct=0, sc=8)
00:29:41.533 starting I/O failed
[... 31 more completions with error (sct=0, sc=8), 16 reads and 16 writes in total, each followed by "starting I/O failed" ...]
00:29:41.533 [2024-11-06 15:41:59.385239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:41.533 [2024-11-06 15:41:59.385548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.533 [2024-11-06 15:41:59.385610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420
00:29:41.533 qpair failed and we were unable to recover it.
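On the completion burst above: per the NVMe base specification, status code type (sct) 0 is Generic Command Status, and status code (sc) 8 in that set is "Command Aborted due to SQ Deletion", which fits commands still in flight when the queue pair is torn down; the CQ transport error -6 is the negated Linux errno ENXIO, matching the "No such device or address" text the log itself prints. A tiny illustrative decoder for just the pair seen here (hypothetical helper, not SPDK's own status handling):

/* decode_status.c - map the (sct, sc) pair from the log to NVMe spec names.
 * Only the codes that appear above are handled; everything else falls through. */
#include <stdio.h>

static const char *nvme_status_str(int sct, int sc)
{
    if (sct == 0) { /* Status Code Type 0h: Generic Command Status */
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "other generic command status";
        }
    }
    return "other status code type";
}

int main(void)
{
    /* Every failed completion above carries sct=0, sc=8. */
    printf("sct=0, sc=8 -> %s\n", nvme_status_str(0, 8));
    return 0;
}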
00:29:41.533 [2024-11-06 15:41:59.385742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.533 [2024-11-06 15:41:59.385786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420
00:29:41.533 qpair failed and we were unable to recover it.
[... the same three-record sequence repeats 48 more times for tqpair=0x7fea1c000b90 between 15:41:59.386309 and 15:41:59.402640 ...]
00:29:41.535 [2024-11-06 15:41:59.402877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132cf30 is same with the state(6) to be set
00:29:41.535 [2024-11-06 15:41:59.403217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.535 [2024-11-06 15:41:59.403310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420
00:29:41.535 qpair failed and we were unable to recover it.
[... the same three-record sequence repeats 48 more times for tqpair=0x7fea28000b90 between 15:41:59.403716 and 15:41:59.420441 ...]
00:29:41.536 [2024-11-06 15:41:59.420701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.420733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.420956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.420986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.421107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.421136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.421491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.421522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.421770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.421799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.422075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.422106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.422470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.422499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.422858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.422890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.423249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.423281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.423658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.423689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 
00:29:41.536 [2024-11-06 15:41:59.424025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.424057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.424396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.424428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.424803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.424835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.425200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.425228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.425577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.425606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.425989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.426020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.426377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.426409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.426755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.426788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.426988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.427017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.427253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.427282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 
00:29:41.536 [2024-11-06 15:41:59.427483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.427512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.427857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.427889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.428229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.428265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.428629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.536 [2024-11-06 15:41:59.428658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.536 qpair failed and we were unable to recover it. 00:29:41.536 [2024-11-06 15:41:59.428915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.428949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.429297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.429327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.429701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.429730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.430077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.430108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.430471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.430500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.430866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.430897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 
00:29:41.537 [2024-11-06 15:41:59.431130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.431160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.431518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.431548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.431779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.431809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.432144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.432173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.432417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.432445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.432820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.432851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.433107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.433135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.433477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.433506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.433870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.433901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.434260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.434289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 
00:29:41.537 [2024-11-06 15:41:59.434491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.434520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.434738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.434780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.435031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.435061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.435303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.435331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.435441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.435473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.435812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.435843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.436185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.436213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.436665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.436693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.437108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.437138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.437548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.437577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 
00:29:41.537 [2024-11-06 15:41:59.437794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.437823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.438218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.438248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.438658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.438687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.438950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.438979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.537 [2024-11-06 15:41:59.439159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.537 [2024-11-06 15:41:59.439188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.537 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.439411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.439443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.439852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.439882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.440233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.440263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.440350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.440377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.440699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.440729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 
00:29:41.538 [2024-11-06 15:41:59.441094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.441125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.441475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.441504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.441826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.441861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.442220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.442249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.442614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.442641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.442861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.442894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.443170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.443197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.443547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.443575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.444004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.444034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.444365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.444394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 
00:29:41.538 [2024-11-06 15:41:59.444614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.444644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.445000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.445030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.445374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.445403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.445770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.445800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.446019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.446048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.446158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.446188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.446551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.446580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.446783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.446812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.447129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.447157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.447515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.447543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 
00:29:41.538 [2024-11-06 15:41:59.447765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.447797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.448046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.448075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.448446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.448474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.448826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.448854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.449207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.449236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.449599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.449628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.449991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.450019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.450336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.450365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.450725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.450761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.451003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.451032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 
00:29:41.538 [2024-11-06 15:41:59.451361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.451389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.451764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.538 [2024-11-06 15:41:59.451794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.538 qpair failed and we were unable to recover it. 00:29:41.538 [2024-11-06 15:41:59.452166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.452196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.452409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.452437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.452775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.452805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.453037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.453069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.453162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.453188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.453453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.453480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.453717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.453755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.454115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.454144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 
00:29:41.539 [2024-11-06 15:41:59.454503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.454532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.454905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.454934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.455276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.455312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.455666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.455694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.456142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.456172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.456383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.456411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.456782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.456811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.457014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.457042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.457344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.457373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.457724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.457760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 
00:29:41.539 [2024-11-06 15:41:59.457984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.458011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.458277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.458305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.458522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.458550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.458868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.458898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.459258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.459286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.459634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.459662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.460085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.460116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.460206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.460233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.460580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.460611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.460975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.461004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 
00:29:41.539 [2024-11-06 15:41:59.461365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.461393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.461763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.461793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.461993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.462021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.462213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.462240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.462444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.462473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.462802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.462831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.462966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.462995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.463329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.463358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.463802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.539 [2024-11-06 15:41:59.463831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.539 qpair failed and we were unable to recover it. 00:29:41.539 [2024-11-06 15:41:59.464198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.540 [2024-11-06 15:41:59.464227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.540 qpair failed and we were unable to recover it. 
00:29:41.540 [2024-11-06 15:41:59.464588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.540 [2024-11-06 15:41:59.464616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420
00:29:41.540 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it" sequence repeats for tqpair=0x7fea28000b90 ...]
00:29:41.834 [2024-11-06 15:41:59.512742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.834 [2024-11-06 15:41:59.512782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420
00:29:41.834 qpair failed and we were unable to recover it.
00:29:41.834 [2024-11-06 15:41:59.513110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.834 [2024-11-06 15:41:59.513204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420
00:29:41.834 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) repeat for tqpair=0x7fea1c000b90 ...]
00:29:41.836 [2024-11-06 15:41:59.537149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.836 [2024-11-06 15:41:59.537177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420
00:29:41.836 qpair failed and we were unable to recover it.
00:29:41.836 [2024-11-06 15:41:59.537506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.537535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.537627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.537654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.537854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.537889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.538007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.538034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.538252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.538279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.538386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.538419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.538754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.538783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.538966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.538995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.539211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.539242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.539592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.539621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 
00:29:41.836 [2024-11-06 15:41:59.540004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.540034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.540243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.540271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.540593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.540622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.540986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.541017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.541234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.541264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.541464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.541492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.541866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.541897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.542265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.542294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.542651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.542681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.543029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.543059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 
00:29:41.836 [2024-11-06 15:41:59.543431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.543459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.543800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.543835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.544060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.544089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.544307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.544335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.544538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.544570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.544782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.544812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-06 15:41:59.545061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-06 15:41:59.545089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.545330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.545361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.545586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.545614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.545849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.545880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 
00:29:41.837 [2024-11-06 15:41:59.546288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.546316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.546675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.546704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.547111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.547141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.547484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.547513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.547743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.547784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.548137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.548166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.548388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.548415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.548797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.548829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.549157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.549188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.549429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.549458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 
00:29:41.837 [2024-11-06 15:41:59.549693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.549720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.550043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.550072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.550466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.550495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.550850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.550881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.551237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.551264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.551641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.551670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.552037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.552066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.552301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.552329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.552720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.552758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.552981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.553009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 
00:29:41.837 [2024-11-06 15:41:59.553209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.553237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.553435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.553463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.553798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.553828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.554217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.554247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.554454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.554482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.554828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.554858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.555091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.555119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.555469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.555497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.555709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.555737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.555941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.555973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 
00:29:41.837 [2024-11-06 15:41:59.556363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.556392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.556526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.556553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.556910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.556940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.557054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.557083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.557594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.557686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.558077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.558173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.558463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.558505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.558636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.558665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.558998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.559029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.559205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.559245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 
00:29:41.837 [2024-11-06 15:41:59.559490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.559519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.559742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.559781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.560153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.560182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.560381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.560409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.560783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.560815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.561182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.561210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.561594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.561623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.561878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.561916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.562282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.562311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.562682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.562712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 
00:29:41.837 [2024-11-06 15:41:59.562946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.562976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.563334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.563362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.563739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.563777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.564021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.564049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.837 qpair failed and we were unable to recover it. 00:29:41.837 [2024-11-06 15:41:59.564414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.837 [2024-11-06 15:41:59.564443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.564575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.564603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.564949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.564980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.565347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.565376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.565631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.565658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.565989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.566019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-11-06 15:41:59.566381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.566410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.566631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.566659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.566994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.567024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.567365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.567393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.567609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.567638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.567889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.567918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.568121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.568151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.568514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.568542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.568809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.568840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.569209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.569237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-11-06 15:41:59.569616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.569644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.569997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.570027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.570272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.570300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.570631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.570659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.570991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.571022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.571355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.571384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.571625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.571653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.571854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.571884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.572241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.572268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.572576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.572611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-11-06 15:41:59.572846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.572880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.573248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.573278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.573524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.573552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.573910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.573940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.574154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.574182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.574538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.574567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.574898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.574929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.575239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.575267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.575621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.575650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.576009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.576039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-11-06 15:41:59.576400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.576429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.576796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.576828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.577182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.577212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.577586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.577615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.577980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.578010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.578369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.578398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.578784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.578814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.579155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.579184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.579621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.579650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-11-06 15:41:59.580001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.838 [2024-11-06 15:41:59.580031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:41.838 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-11-06 15:41:59.580356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.580384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.580741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.580781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.581139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.581169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.581540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.581569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.581695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.581727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.581952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.581980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.582298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.582327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.582530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.582558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.582718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.582757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.583112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.583141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.583512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.583541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.583783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.583814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.584061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.584089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.584323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.584351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.584712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.584742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.585111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.585140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.838 qpair failed and we were unable to recover it.
00:29:41.838 [2024-11-06 15:41:59.585517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.838 [2024-11-06 15:41:59.585545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.585770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.585801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.585942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.585970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.586377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.586412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.586756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.586786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.587042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.587075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.587162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.587190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.587606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.587635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.587766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.587795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.588034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.588063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.588291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.588321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.588662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.588691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.588787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.588815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.589378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.589471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
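The records above are one failure event repeated: errno 111 is ECONNREFUSED on Linux, meaning nothing on the target was accepting TCP connections at 10.0.0.2:4420 (the address and port are taken from the log; 4420 is the conventional NVMe/TCP port). A minimal standalone C sketch, not SPDK code, that reproduces the same errno when run against a host with no listener on that port:

    /* Minimal sketch (not SPDK code): reproduce the connect() failure above.
       errno 111 on Linux is ECONNREFUSED. Address and port come from the log;
       run against a host with no listener on 4420. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the target this prints:
               connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }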
00:29:41.839 [2024-11-06 15:41:59.589921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.589963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.590330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.590361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.590593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.590621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.591079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.591174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.591621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.591657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.591876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.591909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.592258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.592288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.592654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.592682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.592936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.592966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.593215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.593244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.593600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.593628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.594002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.594034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.594292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.594322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.594662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.594691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.595098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.595128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.595478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.595510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.595864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.595905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.596313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.596343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.596701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.596732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.597105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.597134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.597515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.597544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.597928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.597959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.598310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.598339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.598586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.598614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.598866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.598898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.599289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.599317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.599701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.599730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.600109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.600139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.600476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.600505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.600865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.600895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.601237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.601267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.601361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.601389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.601759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.601790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.602147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.602176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.602346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.602374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.602620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.839 [2024-11-06 15:41:59.602648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.839 qpair failed and we were unable to recover it.
00:29:41.839 [2024-11-06 15:41:59.602907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.602938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.603274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.603304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.603560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.603588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.603935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.603965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.604176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.604205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.604591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.604620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.604847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.604878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.605119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.605153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.605513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.605543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.605983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.606012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.606235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.606264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.606471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.606501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.606775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.606805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.607147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.607177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.607429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.607458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.607802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.607830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.608064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.608094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.608418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.608447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.608669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.608697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.609068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.609097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.609459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.609487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.609828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.609860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.610196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.610226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.610633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.610663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.610883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.610913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.611257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.611286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.611644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.611672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.612030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.612058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.612447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.612476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.612822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.612852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.613051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.613080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.613322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.613352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.613729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.613766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
00:29:41.840 [2024-11-06 15:41:59.614124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.840 [2024-11-06 15:41:59.614155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:41.840 qpair failed and we were unable to recover it.
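Each three-line group above shows the same escalation: the posix socket layer reports the failed connect(), nvme_tcp_qpair_connect_sock surfaces it for the queue pair, and the caller finally gives the qpair up ("qpair failed and we were unable to recover it"). A hedged sketch of that control flow, with invented names (try_connect, max_attempts) standing in for the real SPDK internals rather than reproducing the actual reconnect logic:

    /* Illustrative only -- not SPDK's actual code. Mirrors the escalation in
       the log: repeated transport-level connect failures, then the qpair is
       declared unrecoverable. */
    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for the socket connect that keeps failing with errno 111. */
    static bool try_connect(void)
    {
        return false;
    }

    /* Invented helper: retry a fixed number of times, then give the qpair up,
       printing the same terminal message the log shows. */
    static int qpair_connect_with_retries(int max_attempts)
    {
        for (int attempt = 1; attempt <= max_attempts; attempt++) {
            if (try_connect()) {
                return 0; /* connected */
            }
            fprintf(stderr, "attempt %d: sock connection error\n", attempt);
        }
        fprintf(stderr, "qpair failed and we were unable to recover it.\n");
        return -1;
    }

    int main(void)
    {
        return qpair_connect_with_retries(3) == 0 ? 0 : 1;
    }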
00:29:41.840 [2024-11-06 15:41:59.614464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-11-06 15:41:59.614505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-11-06 15:41:59.614840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-11-06 15:41:59.614871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-11-06 15:41:59.615216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-11-06 15:41:59.615245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-11-06 15:41:59.615480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.615509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.615876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.615906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.616080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.616109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.616450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.616480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.616844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.616874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.617230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.617259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.617640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.617669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 
00:29:41.841 [2024-11-06 15:41:59.617877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.617906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.618119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.618148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.618493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.618521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.618861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.618891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.619269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.619299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.619638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.619668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.619877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.619906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.620259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.620288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.620653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.620682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.621144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.621173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 
00:29:41.841 [2024-11-06 15:41:59.621506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.621535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.621951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.621982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.622310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.622340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.622565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.622593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.622926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.622965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.623234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.623264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.623563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.623593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.623939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.623969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.624298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.624327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.624538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.624566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 
00:29:41.841 [2024-11-06 15:41:59.624894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.624923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.625332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.625360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.625576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.625604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.625936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.625966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.626192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.626220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.626564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.626593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.626970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.627001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.627374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.627402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.627763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.627794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.628135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.628164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 
00:29:41.841 [2024-11-06 15:41:59.628488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.628517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.628873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.628903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.629111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.629139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.629496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.629524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.629780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.629810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.630075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.630110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.630394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.630423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.630770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.630800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.631022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.631052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-11-06 15:41:59.631434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.631464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 
00:29:41.841 [2024-11-06 15:41:59.631688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-11-06 15:41:59.631717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.632108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.632138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.632492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.632522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.632855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.632885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.633217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.633245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.633500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.633530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.633730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.633768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.634035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.634064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.634460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.634489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.634754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.634784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 
00:29:41.842 [2024-11-06 15:41:59.634900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.634932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.635335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.635426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.636024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.636115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.636268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.636302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.636452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.636485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.636851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.636883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.637103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.637135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.637509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.637538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.637668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.637700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-11-06 15:41:59.638101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-11-06 15:41:59.638133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 
00:29:41.842 [2024-11-06 15:41:59.638395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.842 [2024-11-06 15:41:59.638423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420
00:29:41.842 qpair failed and we were unable to recover it.
00:29:41.843 [2024-11-06 15:41:59.650801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.843 [2024-11-06 15:41:59.650829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420
00:29:41.843 qpair failed and we were unable to recover it.
00:29:41.843 [2024-11-06 15:41:59.651362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.843 [2024-11-06 15:41:59.651465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420
00:29:41.843 qpair failed and we were unable to recover it.
00:29:41.847 [2024-11-06 15:41:59.707113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.707143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.707490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.707519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.707876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.707905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.708143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.708173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.708518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.708547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.708778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.708809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.709168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.709197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.709548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.709577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.709916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.709946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.710326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.710355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 
00:29:41.847 [2024-11-06 15:41:59.710712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.710741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.711145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.711177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.711401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.711434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.711752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.711782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.712143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.847 [2024-11-06 15:41:59.712173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.847 qpair failed and we were unable to recover it. 00:29:41.847 [2024-11-06 15:41:59.712399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.712427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.712799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.712829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.713165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.713194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.713615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.713644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.713992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.714021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 
00:29:41.848 [2024-11-06 15:41:59.714242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.714274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.714628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.714657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.714869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.714901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.715252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.715280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.715641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.715670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.716037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.716066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.716415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.716443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.716801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.716831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.717084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.717115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.717452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.717481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 
00:29:41.848 [2024-11-06 15:41:59.717705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.717732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.718077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.718106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.718465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.718494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.718854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.718883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.719242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.719271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.719628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.719656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.720022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.720053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.720272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.720300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.720660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.720695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.721052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.721082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 
00:29:41.848 [2024-11-06 15:41:59.721511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.721540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.721889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.721919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.722127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.722158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.722515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.722544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.722815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.722844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.723052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.723081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.723446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.723474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.723817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.723847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.724189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.724218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.724425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.724457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 
00:29:41.848 [2024-11-06 15:41:59.724699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.724728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.848 [2024-11-06 15:41:59.724947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.848 [2024-11-06 15:41:59.724976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.848 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.725347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.725377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.725736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.725774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.725987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.726015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.726397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.726426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.726782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.726812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.727177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.727205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.727542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.727570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.727963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.727993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 
00:29:41.849 [2024-11-06 15:41:59.728436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.728465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.728819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.728849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.729196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.729225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.729554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.729582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.729833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.729862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.730213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.730242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.730592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.730620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.730965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.730996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.731359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.731388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.731728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.731762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 
00:29:41.849 [2024-11-06 15:41:59.732108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.732136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.732345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.732373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.732570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.732598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.732831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.732860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.732952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.732979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.733107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.733134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.733497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.733525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.733900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.733931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.734291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.734326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.734687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.734716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 
00:29:41.849 [2024-11-06 15:41:59.735087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.735117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.735461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.735489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.735856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.735885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.736222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.736252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.736617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.736645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.737018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.737049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.737401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.737431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.737624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.737653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.737931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.737960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.738186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.738215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 
00:29:41.849 [2024-11-06 15:41:59.738570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-11-06 15:41:59.738598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-11-06 15:41:59.738814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.738843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.738944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.738973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.739337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.739367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.739601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.739629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.739870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.739900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.740116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.740148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.740490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.740519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.740762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.740790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.741140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.741168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 
00:29:41.850 [2024-11-06 15:41:59.741590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.741618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.741962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.741992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.742220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.742248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.742614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.742642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.743009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.743038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.743390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.743420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.743787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.743816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.744168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.744196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.744542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.744571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.744919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.744949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 
00:29:41.850 [2024-11-06 15:41:59.745305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.745333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.745548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.745576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.745792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.745822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.746042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.746071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.746287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.746315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.746656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.746685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.746895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-11-06 15:41:59.746927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-11-06 15:41:59.747277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.747306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.747672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.747707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.748057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.748086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 
00:29:41.852 [2024-11-06 15:41:59.748285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.748314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.748535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.748563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.748918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.748948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.749312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.749341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.749687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.749714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.750044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.750074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.750437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.750466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.750823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.750853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.751203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.751231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.751596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.751624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 
00:29:41.852 [2024-11-06 15:41:59.751866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.751895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.752134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.752162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.752427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.752459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.752825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.752855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.753063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.753091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.753446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.753475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-11-06 15:41:59.753827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-11-06 15:41:59.753857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.853 [2024-11-06 15:41:59.754077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-11-06 15:41:59.754105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.853 qpair failed and we were unable to recover it. 00:29:41.853 [2024-11-06 15:41:59.754461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-11-06 15:41:59.754490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.853 qpair failed and we were unable to recover it. 00:29:41.853 [2024-11-06 15:41:59.754858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-11-06 15:41:59.754888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:41.853 qpair failed and we were unable to recover it. 
00:29:41.853 [2024-11-06 15:41:59.755256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.853 [2024-11-06 15:41:59.755284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420
00:29:41.853 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every reconnect attempt between 15:41:59.755256 and 15:41:59.825123: each connect() to 10.0.0.2:4420 returns errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error on tqpair=0x7fea28000b90, and each qpair fails without recovery ...]
00:29:42.135 [2024-11-06 15:41:59.825092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.135 [2024-11-06 15:41:59.825123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420
00:29:42.135 qpair failed and we were unable to recover it.
00:29:42.135 [2024-11-06 15:41:59.825445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.135 [2024-11-06 15:41:59.825475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:42.135 qpair failed and we were unable to recover it. 00:29:42.135 [2024-11-06 15:41:59.825709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.135 [2024-11-06 15:41:59.825738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:42.135 qpair failed and we were unable to recover it. 00:29:42.135 [2024-11-06 15:41:59.826069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.135 [2024-11-06 15:41:59.826099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:42.135 qpair failed and we were unable to recover it. 00:29:42.135 [2024-11-06 15:41:59.826457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.135 [2024-11-06 15:41:59.826486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:42.135 qpair failed and we were unable to recover it. 00:29:42.135 [2024-11-06 15:41:59.826838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.135 [2024-11-06 15:41:59.826868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:42.135 qpair failed and we were unable to recover it. 00:29:42.135 [2024-11-06 15:41:59.827308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.135 [2024-11-06 15:41:59.827337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:42.135 qpair failed and we were unable to recover it. 00:29:42.135 [2024-11-06 15:41:59.827688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.135 [2024-11-06 15:41:59.827718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:42.135 qpair failed and we were unable to recover it. 00:29:42.135 [2024-11-06 15:41:59.828126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.135 [2024-11-06 15:41:59.828156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:42.135 qpair failed and we were unable to recover it. 00:29:42.135 [2024-11-06 15:41:59.828508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.135 [2024-11-06 15:41:59.828536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:42.135 qpair failed and we were unable to recover it. 00:29:42.135 [2024-11-06 15:41:59.828917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.135 [2024-11-06 15:41:59.828948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea28000b90 with addr=10.0.0.2, port=4420 00:29:42.135 qpair failed and we were unable to recover it. 
00:29:42.135 [2024-11-06 15:41:59.830712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.135 [2024-11-06 15:41:59.830825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:42.135 qpair failed and we were unable to recover it.
00:29:42.136 [2024-11-06 15:41:59.846712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.136 [2024-11-06 15:41:59.846811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:42.136 qpair failed and we were unable to recover it.
00:29:42.138 [2024-11-06 15:41:59.864872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.138 [2024-11-06 15:41:59.864950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea1c000b90 with addr=10.0.0.2, port=4420
00:29:42.138 qpair failed and we were unable to recover it.
00:29:42.139 [2024-11-06 15:41:59.884464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-06 15:41:59.884501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-06 15:41:59.884863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-06 15:41:59.884894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-06 15:41:59.885254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-06 15:41:59.885283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-06 15:41:59.885625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-06 15:41:59.885655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-06 15:41:59.885989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-06 15:41:59.886019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-06 15:41:59.886244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-06 15:41:59.886273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-06 15:41:59.886612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-06 15:41:59.886641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-06 15:41:59.886988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-06 15:41:59.887018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-06 15:41:59.887371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-06 15:41:59.887399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-06 15:41:59.887743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-06 15:41:59.887780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 
00:29:42.139 [2024-11-06 15:41:59.888123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-06 15:41:59.888152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-06 15:41:59.888371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-06 15:41:59.888399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-06 15:41:59.888767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.888798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.889143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.889174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.889505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.889534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.889920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.889951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.890144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.890173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.890513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.890543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.890792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.890822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.891027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.891057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 
00:29:42.140 [2024-11-06 15:41:59.891401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.891430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.891643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.891673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.891889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.891919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.892281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.892310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.892674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.892705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.893046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.893078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.893308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.893341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.893577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.893607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.893972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.894003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.894332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.894361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 
00:29:42.140 [2024-11-06 15:41:59.894573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.894601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.894848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.894878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.895109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.895137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.895389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.895416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.895627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.895656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.895995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.896025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.896346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.896375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.896729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.896775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.897004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.897033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.897253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.897289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 
00:29:42.140 [2024-11-06 15:41:59.897530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.897559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.897648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.897675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.897904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.897935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.898182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.898211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.898570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.898600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.898950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.898980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.899340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.899369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.899741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.899779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.899988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.900015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.140 [2024-11-06 15:41:59.900377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.900406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 
00:29:42.140 [2024-11-06 15:41:59.900758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.140 [2024-11-06 15:41:59.900788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.140 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.901141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.901169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.901403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.901432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.901791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.901822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.902172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.902201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.902555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.902585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.902955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.902984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.903217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.903246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.903569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.903597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.903960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.903990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 
00:29:42.141 [2024-11-06 15:41:59.904153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.904184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.904393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.904421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.904621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.904649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.904879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.904911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.905258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.905289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.905541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.905569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.905814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.905845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.906264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.906295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.906664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.906693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.907055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.907085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 
00:29:42.141 [2024-11-06 15:41:59.907167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.907195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.907515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.907544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.907752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.907783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.908023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.908050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.908255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.908284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.908632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.908660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.908871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.908901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.909249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.909277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.909649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.909677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.909903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.909939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 
00:29:42.141 [2024-11-06 15:41:59.910390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.910419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.910631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.910659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.911003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.911033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.911187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.911219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.141 [2024-11-06 15:41:59.911315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.141 [2024-11-06 15:41:59.911343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.141 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.911678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.911708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.911921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.911952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.912281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.912310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.912661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.912691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.912943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.912974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 
00:29:42.142 [2024-11-06 15:41:59.913339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.913369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.913705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.913733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.914103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.914134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.914472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.914503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.914862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.914892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.915221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.915250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.915616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.915645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.915979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.916008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.916362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.916390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.916593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.916622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 
00:29:42.142 [2024-11-06 15:41:59.917040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.917070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.917420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.917449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.917664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.917693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.917937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.917969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.918180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.918210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.918538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.918566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.918940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.918971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.919193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.919222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.919457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.919485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.919715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.919743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 
00:29:42.142 [2024-11-06 15:41:59.919992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.920020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.920363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.920391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.920779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.920810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.921142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.921172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.921518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.921547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.921795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.921823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.922192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.922220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.922578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.922606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.922818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.922848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.922956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.922992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 
00:29:42.142 [2024-11-06 15:41:59.923308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.923338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.923687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.923716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.923807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.142 [2024-11-06 15:41:59.923835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.142 qpair failed and we were unable to recover it. 00:29:42.142 [2024-11-06 15:41:59.924074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.143 [2024-11-06 15:41:59.924101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.143 qpair failed and we were unable to recover it. 00:29:42.143 [2024-11-06 15:41:59.924317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.143 [2024-11-06 15:41:59.924346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.143 qpair failed and we were unable to recover it. 00:29:42.143 [2024-11-06 15:41:59.924584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.143 [2024-11-06 15:41:59.924616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.143 qpair failed and we were unable to recover it. 00:29:42.143 [2024-11-06 15:41:59.924819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.143 [2024-11-06 15:41:59.924848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.143 qpair failed and we were unable to recover it. 00:29:42.143 [2024-11-06 15:41:59.925054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.143 [2024-11-06 15:41:59.925083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.143 qpair failed and we were unable to recover it. 00:29:42.143 [2024-11-06 15:41:59.925312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.143 [2024-11-06 15:41:59.925339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.143 qpair failed and we were unable to recover it. 00:29:42.143 [2024-11-06 15:41:59.925683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.143 [2024-11-06 15:41:59.925711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.143 qpair failed and we were unable to recover it. 
00:29:42.143 [2024-11-06 15:41:59.926065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.143 [2024-11-06 15:41:59.926094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:42.143 qpair failed and we were unable to recover it.
00:29:42.146 [... the same three-line failure sequence repeats continuously with only the in-log timestamps advancing (15:41:59.926441 through 15:41:59.962218), as every connection attempt to 10.0.0.2:4420 is refused with errno = 111 ...]
00:29:42.146 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:42.146 [2024-11-06 15:41:59.962626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.146 [2024-11-06 15:41:59.962655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:42.146 qpair failed and we were unable to recover it.
00:29:42.146 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:29:42.146 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:42.146 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:42.146 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:42.146 [... the connect()/qpair failure sequence also recurs between and after these trace lines (in-log timestamps 15:41:59.962996 through 15:41:59.964668) ...]
00:29:42.146 [2024-11-06 15:41:59.965064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.146 [2024-11-06 15:41:59.965094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420
00:29:42.146 qpair failed and we were unable to recover it.
00:29:42.148 [... the same three-line failure sequence keeps repeating (in-log timestamps 15:41:59.965428 through 15:41:59.995123); every attempt against 10.0.0.2:4420 is refused and no qpair recovers ...]
00:29:42.148 [2024-11-06 15:41:59.995528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.148 [2024-11-06 15:41:59.995557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.148 qpair failed and we were unable to recover it. 00:29:42.148 [2024-11-06 15:41:59.995899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.148 [2024-11-06 15:41:59.995929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.148 qpair failed and we were unable to recover it. 00:29:42.148 [2024-11-06 15:41:59.996140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.148 [2024-11-06 15:41:59.996170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.148 qpair failed and we were unable to recover it. 00:29:42.148 [2024-11-06 15:41:59.996413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.148 [2024-11-06 15:41:59.996444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.148 qpair failed and we were unable to recover it. 00:29:42.148 [2024-11-06 15:41:59.996562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.148 [2024-11-06 15:41:59.996590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.148 qpair failed and we were unable to recover it. 00:29:42.148 [2024-11-06 15:41:59.996752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.148 [2024-11-06 15:41:59.996781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.148 qpair failed and we were unable to recover it. 00:29:42.148 [2024-11-06 15:41:59.997118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.148 [2024-11-06 15:41:59.997154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.148 qpair failed and we were unable to recover it. 00:29:42.148 [2024-11-06 15:41:59.997245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.148 [2024-11-06 15:41:59.997272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.148 qpair failed and we were unable to recover it. 00:29:42.148 [2024-11-06 15:41:59.997625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.148 [2024-11-06 15:41:59.997654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.148 qpair failed and we were unable to recover it. 00:29:42.148 [2024-11-06 15:41:59.998072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.149 [2024-11-06 15:41:59.998102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fea20000b90 with addr=10.0.0.2, port=4420 00:29:42.149 qpair failed and we were unable to recover it. 
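errno = 111 is ECONNREFUSED: the initiator's connect() reaches the host at 10.0.0.2, but nothing is accepting on port 4420 yet, so the kernel refuses the TCP handshake and every qpair attempt fails the same way. A minimal shell sketch of checking for that condition (illustrative only, not part of the test harness; assumes bash's /dev/tcp redirection and coreutils timeout):

    # Probe 10.0.0.2:4420: with no NVMe/TCP listener bound yet, the TCP
    # handshake is refused, which is the errno = 111 (ECONNREFUSED) above.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener is up on 10.0.0.2:4420"
    else
        echo "connect() to 10.0.0.2:4420 failed (refused or timed out)"
    fi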
00:29:42.149 [... the same errno = 111 failure pattern continues across the 15:41:59/15:42:00 boundary, interleaved with the test's xtrace output ...]
00:29:42.149 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:42.149 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
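The rpc_cmd wrapper traced above drives SPDK's JSON-RPC client; outside the harness the same bdev creation can be issued directly (sketch; assumes a running SPDK target and scripts/rpc.py from the SPDK tree on its default RPC socket):

    # Create a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0.
    # On success the RPC prints the bdev name, matching the "Malloc0"
    # output that appears further down in this log.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0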
00:29:42.149 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.149 15:41:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:42.149 [... connect() failed, errno = 111 / sock connection error of tqpair=0x7fea20000b90 (addr=10.0.0.2, port=4420) keeps repeating around the RPC trace ...]
00:29:42.150 [... failures on tqpair=0x7fea20000b90 continue through 15:42:00.008; from 15:42:00.008770 the retried qpair reports a new handle, still errno = 111 against the same address ...]
00:29:42.150 [2024-11-06 15:42:00.008770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.150 [2024-11-06 15:42:00.008857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:42.150 qpair failed and we were unable to recover it.
00:29:42.151 [... this tqpair=0x131f010 failure triplet repeats through 15:42:00.033; several dozen further repetitions elided ...]
00:29:42.152 Malloc0
00:29:42.152 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.152 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:42.152 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.152 [... errno = 111 failures on tqpair=0x131f010 (addr=10.0.0.2, port=4420) continue, interleaved with the RPC trace ...]
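Equivalent standalone form of the transport RPC traced above (a sketch under the same scripts/rpc.py assumptions as before; -t selects the transport type, and -o is rpc.py's TCP-only C2H success toggle):

    # Instantiate the NVMe-oF TCP transport inside the target; the
    # "TCP Transport Init" notice below is the target-side acknowledgement.
    scripts/rpc.py nvmf_create_transport -t tcp -o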
00:29:42.152 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:42.152 [... errno = 111 failures on tqpair=0x131f010 (addr=10.0.0.2, port=4420) continue while the transport RPC runs ...]
00:29:42.152 [2024-11-06 15:42:00.041900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:42.152 [... errno = 111 failures on tqpair=0x131f010 continue past the transport-init notice ...]
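With the transport initialized, a target still needs a subsystem, a namespace, and a listener before the initiator's retries at 10.0.0.2:4420 can succeed. A sketch of the usual follow-on bring-up; the NQN and serial number are placeholder values, not taken from this log:

    # Typical NVMe-oF target bring-up after transport creation (illustrative):
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420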
00:29:42.153 [2024-11-06 15:42:00.048763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.153 [2024-11-06 15:42:00.048793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:42.153 qpair failed and we were unable to recover it.
00:29:42.153 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.153 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:42.153 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.153 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:42.154 [2024-11-06 15:42:00.061632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.154 [2024-11-06 15:42:00.061661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:42.154 qpair failed and we were unable to recover it.
15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.154 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:42.154 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.154 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:42.155 [2024-11-06 15:42:00.073906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.155 [2024-11-06 15:42:00.073937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131f010 with addr=10.0.0.2, port=4420
00:29:42.155 qpair failed and we were unable to recover it.
00:29:42.155 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.155 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.155 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:42.156 [2024-11-06 15:42:00.082114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 15:42:00.092808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-11-06 15:42:00.092922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-11-06 15:42:00.092967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-11-06 15:42:00.092989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-11-06 15:42:00.093010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
[2024-11-06 15:42:00.093059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.156 qpair failed and we were unable to recover it.
00:29:42.156 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.156 15:42:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3967606
00:29:42.418 [2024-11-06 15:42:00.102685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-11-06 15:42:00.102763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-11-06 15:42:00.102790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-11-06 15:42:00.102804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-11-06 15:42:00.102817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
[2024-11-06 15:42:00.102844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.418 qpair failed and we were unable to recover it.
00:29:42.419 [2024-11-06 15:42:00.293105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.419 [2024-11-06 15:42:00.293153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.419 [2024-11-06 15:42:00.293166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.419 [2024-11-06 15:42:00.293173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.419 [2024-11-06 15:42:00.293183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.419 [2024-11-06 15:42:00.293196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.419 qpair failed and we were unable to recover it. 00:29:42.419 [2024-11-06 15:42:00.303164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.419 [2024-11-06 15:42:00.303226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.419 [2024-11-06 15:42:00.303239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.419 [2024-11-06 15:42:00.303246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.419 [2024-11-06 15:42:00.303252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.419 [2024-11-06 15:42:00.303265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.419 qpair failed and we were unable to recover it. 00:29:42.419 [2024-11-06 15:42:00.313263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.419 [2024-11-06 15:42:00.313319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.419 [2024-11-06 15:42:00.313331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.419 [2024-11-06 15:42:00.313339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.419 [2024-11-06 15:42:00.313345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.419 [2024-11-06 15:42:00.313358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.419 qpair failed and we were unable to recover it. 
00:29:42.419 [2024-11-06 15:42:00.323113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.419 [2024-11-06 15:42:00.323157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.419 [2024-11-06 15:42:00.323171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.419 [2024-11-06 15:42:00.323178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.419 [2024-11-06 15:42:00.323184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.419 [2024-11-06 15:42:00.323198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.419 qpair failed and we were unable to recover it. 00:29:42.419 [2024-11-06 15:42:00.333254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.419 [2024-11-06 15:42:00.333304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.419 [2024-11-06 15:42:00.333317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.419 [2024-11-06 15:42:00.333325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.419 [2024-11-06 15:42:00.333331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.419 [2024-11-06 15:42:00.333344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.419 qpair failed and we were unable to recover it. 00:29:42.419 [2024-11-06 15:42:00.343271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.419 [2024-11-06 15:42:00.343321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.419 [2024-11-06 15:42:00.343334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.419 [2024-11-06 15:42:00.343341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.419 [2024-11-06 15:42:00.343347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.419 [2024-11-06 15:42:00.343360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.419 qpair failed and we were unable to recover it. 
00:29:42.419 [2024-11-06 15:42:00.353237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.419 [2024-11-06 15:42:00.353288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.419 [2024-11-06 15:42:00.353300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.419 [2024-11-06 15:42:00.353307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.419 [2024-11-06 15:42:00.353314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.419 [2024-11-06 15:42:00.353326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.419 qpair failed and we were unable to recover it. 00:29:42.419 [2024-11-06 15:42:00.363232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.419 [2024-11-06 15:42:00.363278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.419 [2024-11-06 15:42:00.363291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.419 [2024-11-06 15:42:00.363297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.419 [2024-11-06 15:42:00.363303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.419 [2024-11-06 15:42:00.363316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.419 qpair failed and we were unable to recover it. 00:29:42.419 [2024-11-06 15:42:00.373300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.419 [2024-11-06 15:42:00.373355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.419 [2024-11-06 15:42:00.373368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.419 [2024-11-06 15:42:00.373375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.419 [2024-11-06 15:42:00.373381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.420 [2024-11-06 15:42:00.373394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.420 qpair failed and we were unable to recover it. 
00:29:42.420 [2024-11-06 15:42:00.383321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-11-06 15:42:00.383403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-11-06 15:42:00.383419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-11-06 15:42:00.383427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.420 [2024-11-06 15:42:00.383433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.420 [2024-11-06 15:42:00.383445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.420 qpair failed and we were unable to recover it. 00:29:42.420 [2024-11-06 15:42:00.393353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-11-06 15:42:00.393401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-11-06 15:42:00.393414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-11-06 15:42:00.393421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.420 [2024-11-06 15:42:00.393427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.420 [2024-11-06 15:42:00.393440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.420 qpair failed and we were unable to recover it. 00:29:42.682 [2024-11-06 15:42:00.403358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.403405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.403418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.403425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.403431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.682 [2024-11-06 15:42:00.403444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.682 qpair failed and we were unable to recover it. 
00:29:42.682 [2024-11-06 15:42:00.413414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.413470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.413483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.413490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.413497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.682 [2024-11-06 15:42:00.413511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.682 qpair failed and we were unable to recover it. 00:29:42.682 [2024-11-06 15:42:00.423423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.423474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.423498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.423507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.423518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.682 [2024-11-06 15:42:00.423537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.682 qpair failed and we were unable to recover it. 00:29:42.682 [2024-11-06 15:42:00.433454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.433507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.433532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.433540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.433547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.682 [2024-11-06 15:42:00.433565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.682 qpair failed and we were unable to recover it. 
00:29:42.682 [2024-11-06 15:42:00.443450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.443506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.443530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.443539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.443546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.682 [2024-11-06 15:42:00.443564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.682 qpair failed and we were unable to recover it. 00:29:42.682 [2024-11-06 15:42:00.453403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.453457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.453472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.453479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.453486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.682 [2024-11-06 15:42:00.453500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.682 qpair failed and we were unable to recover it. 00:29:42.682 [2024-11-06 15:42:00.463406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.463452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.463466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.463473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.463479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.682 [2024-11-06 15:42:00.463493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.682 qpair failed and we were unable to recover it. 
00:29:42.682 [2024-11-06 15:42:00.473569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.473622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.473647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.473656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.473663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.682 [2024-11-06 15:42:00.473682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.682 qpair failed and we were unable to recover it. 00:29:42.682 [2024-11-06 15:42:00.483557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.483607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.483622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.483629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.483636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.682 [2024-11-06 15:42:00.483650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.682 qpair failed and we were unable to recover it. 00:29:42.682 [2024-11-06 15:42:00.493633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.493683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.493697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.493705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.493712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.682 [2024-11-06 15:42:00.493725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.682 qpair failed and we were unable to recover it. 
00:29:42.682 [2024-11-06 15:42:00.503611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.503694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.503708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.503715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.503721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.682 [2024-11-06 15:42:00.503734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.682 qpair failed and we were unable to recover it. 00:29:42.682 [2024-11-06 15:42:00.513663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.513711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.513728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.513736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.513742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.682 [2024-11-06 15:42:00.513762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.682 qpair failed and we were unable to recover it. 00:29:42.682 [2024-11-06 15:42:00.523642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.682 [2024-11-06 15:42:00.523688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.682 [2024-11-06 15:42:00.523701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.682 [2024-11-06 15:42:00.523709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.682 [2024-11-06 15:42:00.523715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.523728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 
00:29:42.683 [2024-11-06 15:42:00.533710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.533804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.533817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.533824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.533830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.533843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-06 15:42:00.543742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.543802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.543816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.543823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.543831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.543848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-06 15:42:00.553749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.553798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.553811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.553818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.553828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.553841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 
00:29:42.683 [2024-11-06 15:42:00.563762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.563808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.563822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.563829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.563836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.563849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-06 15:42:00.573821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.573888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.573901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.573908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.573914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.573927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-06 15:42:00.583840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.583888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.583900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.583907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.583914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.583927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 
00:29:42.683 [2024-11-06 15:42:00.593831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.593881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.593894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.593901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.593907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.593920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-06 15:42:00.603762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.603815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.603827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.603834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.603840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.603853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-06 15:42:00.613807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.613856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.613869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.613876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.613882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.613896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 
00:29:42.683 [2024-11-06 15:42:00.623834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.623894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.623907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.623913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.623920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.623932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-06 15:42:00.633996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.634046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.634059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.634066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.634072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.634085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-06 15:42:00.643888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.643940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.643957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.643964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.643970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.643983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 
00:29:42.683 [2024-11-06 15:42:00.654057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-06 15:42:00.654110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-06 15:42:00.654123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-06 15:42:00.654130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-06 15:42:00.654137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.683 [2024-11-06 15:42:00.654150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.945 [2024-11-06 15:42:00.664077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.945 [2024-11-06 15:42:00.664162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.945 [2024-11-06 15:42:00.664175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.945 [2024-11-06 15:42:00.664182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.945 [2024-11-06 15:42:00.664188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.945 [2024-11-06 15:42:00.664201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.945 qpair failed and we were unable to recover it. 00:29:42.945 [2024-11-06 15:42:00.674072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.945 [2024-11-06 15:42:00.674120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.945 [2024-11-06 15:42:00.674133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.945 [2024-11-06 15:42:00.674140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.945 [2024-11-06 15:42:00.674146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.945 [2024-11-06 15:42:00.674159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.945 qpair failed and we were unable to recover it. 
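For context on where these errors surface on the host side, here is a minimal sketch assuming SPDK's public NVMe API (spdk/nvme.h); it is not taken from the test scripts. Allocating an I/O qpair is what issues the Fabrics CONNECT on the TCP transport, and the subsequent completion polling is the path that logs "CQ transport error -6 (No such device or address)" once the transport gives up. The 'ctrlr' handle is assumed to come from an earlier attach.

    #include "spdk/nvme.h"

    /* Sketch only: drive one I/O qpair the way the failing attempts above do. */
    static int32_t try_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_qpair *qpair;
        int32_t rc;

        /* On TCP transports this sends the Fabrics CONNECT for the new queue;
         * a reject such as sct 1 / sc 130 makes the allocation fail. */
        qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        if (qpair == NULL) {
            return -1;
        }

        /* Completion processing returns a negative errno once the transport
         * marks the qpair failed; -6 (ENXIO) matches the records above. */
        rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            spdk_nvme_ctrlr_free_io_qpair(qpair);
        }
        return rc;
    }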
00:29:42.945 [2024-11-06 15:42:00.683960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.945 [2024-11-06 15:42:00.684016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.945 [2024-11-06 15:42:00.684032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.945 [2024-11-06 15:42:00.684040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.945 [2024-11-06 15:42:00.684052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.945 [2024-11-06 15:42:00.684070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.945 qpair failed and we were unable to recover it. 00:29:42.945 [2024-11-06 15:42:00.694141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.945 [2024-11-06 15:42:00.694225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.945 [2024-11-06 15:42:00.694239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.945 [2024-11-06 15:42:00.694246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.945 [2024-11-06 15:42:00.694252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.945 [2024-11-06 15:42:00.694265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.945 qpair failed and we were unable to recover it. 00:29:42.945 [2024-11-06 15:42:00.704153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.945 [2024-11-06 15:42:00.704201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.945 [2024-11-06 15:42:00.704215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.945 [2024-11-06 15:42:00.704222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.945 [2024-11-06 15:42:00.704228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.945 [2024-11-06 15:42:00.704241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.945 qpair failed and we were unable to recover it. 
00:29:42.945 [2024-11-06 15:42:00.714200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.945 [2024-11-06 15:42:00.714254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.945 [2024-11-06 15:42:00.714267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.945 [2024-11-06 15:42:00.714274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.945 [2024-11-06 15:42:00.714280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.945 [2024-11-06 15:42:00.714293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.945 qpair failed and we were unable to recover it. 00:29:42.945 [2024-11-06 15:42:00.724184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.945 [2024-11-06 15:42:00.724276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.946 [2024-11-06 15:42:00.724289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.946 [2024-11-06 15:42:00.724296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.946 [2024-11-06 15:42:00.724303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.946 [2024-11-06 15:42:00.724315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.946 qpair failed and we were unable to recover it. 00:29:42.946 [2024-11-06 15:42:00.734175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.946 [2024-11-06 15:42:00.734226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.946 [2024-11-06 15:42:00.734240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.946 [2024-11-06 15:42:00.734247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.946 [2024-11-06 15:42:00.734254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:42.946 [2024-11-06 15:42:00.734267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.946 qpair failed and we were unable to recover it. 
00:29:42.946 [2024-11-06 15:42:00.744290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.946 [2024-11-06 15:42:00.744338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.946 [2024-11-06 15:42:00.744352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.946 [2024-11-06 15:42:00.744358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.946 [2024-11-06 15:42:00.744365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.946 [2024-11-06 15:42:00.744378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.946 qpair failed and we were unable to recover it.
00:29:42.946 [2024-11-06 15:42:00.754220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.946 [2024-11-06 15:42:00.754269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.946 [2024-11-06 15:42:00.754282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.946 [2024-11-06 15:42:00.754289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.946 [2024-11-06 15:42:00.754296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.946 [2024-11-06 15:42:00.754308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.946 qpair failed and we were unable to recover it.
00:29:42.946 [2024-11-06 15:42:00.764297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.946 [2024-11-06 15:42:00.764345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.946 [2024-11-06 15:42:00.764358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.946 [2024-11-06 15:42:00.764366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.946 [2024-11-06 15:42:00.764372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.946 [2024-11-06 15:42:00.764385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.946 qpair failed and we were unable to recover it.
00:29:42.946 [2024-11-06 15:42:00.774366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.946 [2024-11-06 15:42:00.774417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.946 [2024-11-06 15:42:00.774433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.946 [2024-11-06 15:42:00.774440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.946 [2024-11-06 15:42:00.774446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.946 [2024-11-06 15:42:00.774459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.946 qpair failed and we were unable to recover it.
00:29:42.946 [2024-11-06 15:42:00.784388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.946 [2024-11-06 15:42:00.784440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.946 [2024-11-06 15:42:00.784454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.946 [2024-11-06 15:42:00.784461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.946 [2024-11-06 15:42:00.784467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.946 [2024-11-06 15:42:00.784480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.946 qpair failed and we were unable to recover it.
00:29:42.946 [2024-11-06 15:42:00.794281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.946 [2024-11-06 15:42:00.794359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.946 [2024-11-06 15:42:00.794371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.946 [2024-11-06 15:42:00.794378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.946 [2024-11-06 15:42:00.794385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.946 [2024-11-06 15:42:00.794398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.946 qpair failed and we were unable to recover it.
00:29:42.946 [2024-11-06 15:42:00.804414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.946 [2024-11-06 15:42:00.804461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.946 [2024-11-06 15:42:00.804474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.946 [2024-11-06 15:42:00.804481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.946 [2024-11-06 15:42:00.804487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.946 [2024-11-06 15:42:00.804500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.946 qpair failed and we were unable to recover it.
00:29:42.946 [2024-11-06 15:42:00.814472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.946 [2024-11-06 15:42:00.814518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.946 [2024-11-06 15:42:00.814531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.946 [2024-11-06 15:42:00.814539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.946 [2024-11-06 15:42:00.814548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.946 [2024-11-06 15:42:00.814561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.946 qpair failed and we were unable to recover it.
00:29:42.946 [2024-11-06 15:42:00.824476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.946 [2024-11-06 15:42:00.824527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.946 [2024-11-06 15:42:00.824540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.946 [2024-11-06 15:42:00.824548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.946 [2024-11-06 15:42:00.824554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.946 [2024-11-06 15:42:00.824567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.946 qpair failed and we were unable to recover it.
00:29:42.946 [2024-11-06 15:42:00.834502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.946 [2024-11-06 15:42:00.834553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.946 [2024-11-06 15:42:00.834567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.946 [2024-11-06 15:42:00.834574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.946 [2024-11-06 15:42:00.834580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.946 [2024-11-06 15:42:00.834593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.946 qpair failed and we were unable to recover it.
00:29:42.946 [2024-11-06 15:42:00.844519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.946 [2024-11-06 15:42:00.844573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.946 [2024-11-06 15:42:00.844586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.946 [2024-11-06 15:42:00.844593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.946 [2024-11-06 15:42:00.844599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.946 [2024-11-06 15:42:00.844612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.946 qpair failed and we were unable to recover it.
00:29:42.946 [2024-11-06 15:42:00.854598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.946 [2024-11-06 15:42:00.854646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.946 [2024-11-06 15:42:00.854659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.946 [2024-11-06 15:42:00.854667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.947 [2024-11-06 15:42:00.854673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.947 [2024-11-06 15:42:00.854686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.947 qpair failed and we were unable to recover it.
00:29:42.947 [2024-11-06 15:42:00.864587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.947 [2024-11-06 15:42:00.864641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.947 [2024-11-06 15:42:00.864654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.947 [2024-11-06 15:42:00.864661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.947 [2024-11-06 15:42:00.864667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.947 [2024-11-06 15:42:00.864680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.947 qpair failed and we were unable to recover it.
00:29:42.947 [2024-11-06 15:42:00.874623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.947 [2024-11-06 15:42:00.874671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.947 [2024-11-06 15:42:00.874684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.947 [2024-11-06 15:42:00.874691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.947 [2024-11-06 15:42:00.874697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.947 [2024-11-06 15:42:00.874710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.947 qpair failed and we were unable to recover it.
00:29:42.947 [2024-11-06 15:42:00.884612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.947 [2024-11-06 15:42:00.884662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.947 [2024-11-06 15:42:00.884674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.947 [2024-11-06 15:42:00.884681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.947 [2024-11-06 15:42:00.884687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.947 [2024-11-06 15:42:00.884700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.947 qpair failed and we were unable to recover it.
00:29:42.947 [2024-11-06 15:42:00.894724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.947 [2024-11-06 15:42:00.894796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.947 [2024-11-06 15:42:00.894810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.947 [2024-11-06 15:42:00.894817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.947 [2024-11-06 15:42:00.894823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.947 [2024-11-06 15:42:00.894836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.947 qpair failed and we were unable to recover it.
00:29:42.947 [2024-11-06 15:42:00.904691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.947 [2024-11-06 15:42:00.904743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.947 [2024-11-06 15:42:00.904763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.947 [2024-11-06 15:42:00.904769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.947 [2024-11-06 15:42:00.904776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.947 [2024-11-06 15:42:00.904789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.947 qpair failed and we were unable to recover it.
00:29:42.947 [2024-11-06 15:42:00.914725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.947 [2024-11-06 15:42:00.914779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.947 [2024-11-06 15:42:00.914793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.947 [2024-11-06 15:42:00.914800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.947 [2024-11-06 15:42:00.914806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.947 [2024-11-06 15:42:00.914819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.947 qpair failed and we were unable to recover it.
00:29:42.947 [2024-11-06 15:42:00.924739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.947 [2024-11-06 15:42:00.924792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.947 [2024-11-06 15:42:00.924805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.947 [2024-11-06 15:42:00.924812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.947 [2024-11-06 15:42:00.924818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:42.947 [2024-11-06 15:42:00.924831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.947 qpair failed and we were unable to recover it.
00:29:43.209 [2024-11-06 15:42:00.934803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.209 [2024-11-06 15:42:00.934850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.209 [2024-11-06 15:42:00.934863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.209 [2024-11-06 15:42:00.934870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.209 [2024-11-06 15:42:00.934877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.209 [2024-11-06 15:42:00.934890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.209 qpair failed and we were unable to recover it.
00:29:43.209 [2024-11-06 15:42:00.944820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.209 [2024-11-06 15:42:00.944868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.209 [2024-11-06 15:42:00.944880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.209 [2024-11-06 15:42:00.944887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.209 [2024-11-06 15:42:00.944897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.209 [2024-11-06 15:42:00.944910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.209 qpair failed and we were unable to recover it.
00:29:43.209 [2024-11-06 15:42:00.954813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.209 [2024-11-06 15:42:00.954859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.209 [2024-11-06 15:42:00.954871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.209 [2024-11-06 15:42:00.954878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.209 [2024-11-06 15:42:00.954884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.209 [2024-11-06 15:42:00.954898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.209 qpair failed and we were unable to recover it.
00:29:43.209 [2024-11-06 15:42:00.964765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.209 [2024-11-06 15:42:00.964809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.209 [2024-11-06 15:42:00.964822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.209 [2024-11-06 15:42:00.964829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.209 [2024-11-06 15:42:00.964836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.209 [2024-11-06 15:42:00.964848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.209 qpair failed and we were unable to recover it.
00:29:43.209 [2024-11-06 15:42:00.974783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.209 [2024-11-06 15:42:00.974834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.209 [2024-11-06 15:42:00.974847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.209 [2024-11-06 15:42:00.974854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.209 [2024-11-06 15:42:00.974860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.209 [2024-11-06 15:42:00.974873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.209 qpair failed and we were unable to recover it.
00:29:43.209 [2024-11-06 15:42:00.984924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.209 [2024-11-06 15:42:00.984974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.209 [2024-11-06 15:42:00.984987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.209 [2024-11-06 15:42:00.984993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.209 [2024-11-06 15:42:00.985000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.209 [2024-11-06 15:42:00.985013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.209 qpair failed and we were unable to recover it.
00:29:43.209 [2024-11-06 15:42:00.995007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.209 [2024-11-06 15:42:00.995107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.209 [2024-11-06 15:42:00.995119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.209 [2024-11-06 15:42:00.995126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.209 [2024-11-06 15:42:00.995132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.209 [2024-11-06 15:42:00.995145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.209 qpair failed and we were unable to recover it.
00:29:43.209 [2024-11-06 15:42:01.004831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.004876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.004891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.004898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.004904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.004917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.015072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.015132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.015150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.015157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.015164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.015179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.024907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.024960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.024974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.024981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.024987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.025001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.035070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.035119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.035136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.035143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.035149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.035163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.045085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.045130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.045143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.045150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.045156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.045169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.055154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.055205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.055217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.055224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.055231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.055243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.065147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.065193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.065206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.065213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.065219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.065232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.075212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.075283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.075296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.075303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.075313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.075326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.085167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.085216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.085229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.085236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.085242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.085256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.095240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.095292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.095305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.095312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.095319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.095332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.105240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.105324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.105337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.105343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.105350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.105363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.115146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.115215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.115228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.115235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.115241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.115254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.125270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.125322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.210 [2024-11-06 15:42:01.125335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.210 [2024-11-06 15:42:01.125342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.210 [2024-11-06 15:42:01.125348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.210 [2024-11-06 15:42:01.125361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.210 qpair failed and we were unable to recover it.
00:29:43.210 [2024-11-06 15:42:01.135361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.210 [2024-11-06 15:42:01.135413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.211 [2024-11-06 15:42:01.135426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.211 [2024-11-06 15:42:01.135433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.211 [2024-11-06 15:42:01.135439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.211 [2024-11-06 15:42:01.135453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.211 qpair failed and we were unable to recover it.
00:29:43.211 [2024-11-06 15:42:01.145360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.211 [2024-11-06 15:42:01.145409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.211 [2024-11-06 15:42:01.145422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.211 [2024-11-06 15:42:01.145429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.211 [2024-11-06 15:42:01.145435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.211 [2024-11-06 15:42:01.145449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.211 qpair failed and we were unable to recover it.
00:29:43.211 [2024-11-06 15:42:01.155301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.211 [2024-11-06 15:42:01.155351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.211 [2024-11-06 15:42:01.155364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.211 [2024-11-06 15:42:01.155371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.211 [2024-11-06 15:42:01.155377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.211 [2024-11-06 15:42:01.155389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.211 qpair failed and we were unable to recover it.
00:29:43.211 [2024-11-06 15:42:01.165378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.211 [2024-11-06 15:42:01.165477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.211 [2024-11-06 15:42:01.165492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.211 [2024-11-06 15:42:01.165500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.211 [2024-11-06 15:42:01.165506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.211 [2024-11-06 15:42:01.165519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.211 qpair failed and we were unable to recover it.
00:29:43.211 [2024-11-06 15:42:01.175331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.211 [2024-11-06 15:42:01.175420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.211 [2024-11-06 15:42:01.175433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.211 [2024-11-06 15:42:01.175440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.211 [2024-11-06 15:42:01.175446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.211 [2024-11-06 15:42:01.175459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.211 qpair failed and we were unable to recover it.
00:29:43.211 [2024-11-06 15:42:01.185372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.211 [2024-11-06 15:42:01.185468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.211 [2024-11-06 15:42:01.185481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.211 [2024-11-06 15:42:01.185488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.211 [2024-11-06 15:42:01.185495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.211 [2024-11-06 15:42:01.185507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.211 qpair failed and we were unable to recover it.
00:29:43.472 [2024-11-06 15:42:01.195370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.472 [2024-11-06 15:42:01.195419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.195432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.195438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.195445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.195458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.205437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.205498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.205512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.205519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.205529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.205542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.215562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.215613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.215626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.215634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.215640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.215654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.225588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.225638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.225651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.225657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.225663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.225676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.235472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.235519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.235532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.235539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.235545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.235558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.245594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.245643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.245656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.245663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.245669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.245683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.255640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.255687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.255701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.255707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.255714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.255726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.265579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.265630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.265643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.265651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.265657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.265670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.275704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.275757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.275771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.275777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.275784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.275797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.285703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.285754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.285768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.285775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.285781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.285794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.295763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.295814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.295830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.295837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.295843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.295856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.305729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.305788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.305802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.305809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.305816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.305829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.315814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.315867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.473 [2024-11-06 15:42:01.315880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.473 [2024-11-06 15:42:01.315887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.473 [2024-11-06 15:42:01.315893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.473 [2024-11-06 15:42:01.315906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.473 qpair failed and we were unable to recover it.
00:29:43.473 [2024-11-06 15:42:01.325779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.473 [2024-11-06 15:42:01.325827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.474 [2024-11-06 15:42:01.325839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.474 [2024-11-06 15:42:01.325846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.474 [2024-11-06 15:42:01.325853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.474 [2024-11-06 15:42:01.325865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.474 qpair failed and we were unable to recover it.
00:29:43.474 [2024-11-06 15:42:01.335892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.474 [2024-11-06 15:42:01.335942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.474 [2024-11-06 15:42:01.335955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.474 [2024-11-06 15:42:01.335962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.474 [2024-11-06 15:42:01.335972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.474 [2024-11-06 15:42:01.335985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.474 qpair failed and we were unable to recover it.
00:29:43.474 [2024-11-06 15:42:01.345889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.474 [2024-11-06 15:42:01.345939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.474 [2024-11-06 15:42:01.345952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.474 [2024-11-06 15:42:01.345958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.474 [2024-11-06 15:42:01.345965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.474 [2024-11-06 15:42:01.345978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.474 qpair failed and we were unable to recover it.
00:29:43.474 [2024-11-06 15:42:01.355921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.474 [2024-11-06 15:42:01.355987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.474 [2024-11-06 15:42:01.356000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.474 [2024-11-06 15:42:01.356007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.474 [2024-11-06 15:42:01.356013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.474 [2024-11-06 15:42:01.356026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.474 qpair failed and we were unable to recover it.
00:29:43.474 [2024-11-06 15:42:01.365883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.474 [2024-11-06 15:42:01.365927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.474 [2024-11-06 15:42:01.365940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.474 [2024-11-06 15:42:01.365947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.474 [2024-11-06 15:42:01.365953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.474 [2024-11-06 15:42:01.365967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.474 qpair failed and we were unable to recover it.
00:29:43.474 [2024-11-06 15:42:01.376001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.474 [2024-11-06 15:42:01.376048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.474 [2024-11-06 15:42:01.376061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.474 [2024-11-06 15:42:01.376068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.474 [2024-11-06 15:42:01.376074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.474 [2024-11-06 15:42:01.376086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.474 qpair failed and we were unable to recover it.
00:29:43.474 [2024-11-06 15:42:01.385983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.474 [2024-11-06 15:42:01.386046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.474 [2024-11-06 15:42:01.386059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.474 [2024-11-06 15:42:01.386066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.474 [2024-11-06 15:42:01.386072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.474 [2024-11-06 15:42:01.386084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.474 qpair failed and we were unable to recover it.
00:29:43.474 [2024-11-06 15:42:01.396032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.474 [2024-11-06 15:42:01.396080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.474 [2024-11-06 15:42:01.396093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.474 [2024-11-06 15:42:01.396099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.474 [2024-11-06 15:42:01.396105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.474 [2024-11-06 15:42:01.396118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.474 qpair failed and we were unable to recover it.
00:29:43.474 [2024-11-06 15:42:01.405906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.474 [2024-11-06 15:42:01.405953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.474 [2024-11-06 15:42:01.405966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.474 [2024-11-06 15:42:01.405973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.474 [2024-11-06 15:42:01.405979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.474 [2024-11-06 15:42:01.405992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.474 qpair failed and we were unable to recover it.
00:29:43.474 [2024-11-06 15:42:01.416043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.474 [2024-11-06 15:42:01.416105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.474 [2024-11-06 15:42:01.416118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.474 [2024-11-06 15:42:01.416125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.474 [2024-11-06 15:42:01.416131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.474 [2024-11-06 15:42:01.416143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.474 qpair failed and we were unable to recover it.
00:29:43.474 [2024-11-06 15:42:01.426110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.474 [2024-11-06 15:42:01.426162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.474 [2024-11-06 15:42:01.426178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.474 [2024-11-06 15:42:01.426185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.474 [2024-11-06 15:42:01.426191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:43.474 [2024-11-06 15:42:01.426204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.474 qpair failed and we were unable to recover it.
00:29:43.474 [2024-11-06 15:42:01.436072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.474 [2024-11-06 15:42:01.436135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.474 [2024-11-06 15:42:01.436148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.474 [2024-11-06 15:42:01.436155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.474 [2024-11-06 15:42:01.436161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.474 [2024-11-06 15:42:01.436173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.474 qpair failed and we were unable to recover it. 00:29:43.474 [2024-11-06 15:42:01.445989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.474 [2024-11-06 15:42:01.446032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.474 [2024-11-06 15:42:01.446044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.474 [2024-11-06 15:42:01.446051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.474 [2024-11-06 15:42:01.446057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.474 [2024-11-06 15:42:01.446069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.474 qpair failed and we were unable to recover it. 00:29:43.736 [2024-11-06 15:42:01.456206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.456261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.736 [2024-11-06 15:42:01.456274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.736 [2024-11-06 15:42:01.456281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.736 [2024-11-06 15:42:01.456288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.736 [2024-11-06 15:42:01.456300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.736 qpair failed and we were unable to recover it. 
00:29:43.736 [2024-11-06 15:42:01.466092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.466157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.736 [2024-11-06 15:42:01.466170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.736 [2024-11-06 15:42:01.466181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.736 [2024-11-06 15:42:01.466187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.736 [2024-11-06 15:42:01.466200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.736 qpair failed and we were unable to recover it. 00:29:43.736 [2024-11-06 15:42:01.476235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.476288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.736 [2024-11-06 15:42:01.476301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.736 [2024-11-06 15:42:01.476308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.736 [2024-11-06 15:42:01.476314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.736 [2024-11-06 15:42:01.476327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.736 qpair failed and we were unable to recover it. 00:29:43.736 [2024-11-06 15:42:01.486206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.486254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.736 [2024-11-06 15:42:01.486267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.736 [2024-11-06 15:42:01.486274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.736 [2024-11-06 15:42:01.486280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.736 [2024-11-06 15:42:01.486293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.736 qpair failed and we were unable to recover it. 
00:29:43.736 [2024-11-06 15:42:01.496351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.496423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.736 [2024-11-06 15:42:01.496436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.736 [2024-11-06 15:42:01.496443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.736 [2024-11-06 15:42:01.496449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.736 [2024-11-06 15:42:01.496462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.736 qpair failed and we were unable to recover it. 00:29:43.736 [2024-11-06 15:42:01.506317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.506367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.736 [2024-11-06 15:42:01.506380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.736 [2024-11-06 15:42:01.506386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.736 [2024-11-06 15:42:01.506392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.736 [2024-11-06 15:42:01.506405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.736 qpair failed and we were unable to recover it. 00:29:43.736 [2024-11-06 15:42:01.516339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.516392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.736 [2024-11-06 15:42:01.516406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.736 [2024-11-06 15:42:01.516413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.736 [2024-11-06 15:42:01.516419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.736 [2024-11-06 15:42:01.516432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.736 qpair failed and we were unable to recover it. 
00:29:43.736 [2024-11-06 15:42:01.526330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.526379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.736 [2024-11-06 15:42:01.526392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.736 [2024-11-06 15:42:01.526398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.736 [2024-11-06 15:42:01.526405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.736 [2024-11-06 15:42:01.526418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.736 qpair failed and we were unable to recover it. 00:29:43.736 [2024-11-06 15:42:01.536422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.536506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.736 [2024-11-06 15:42:01.536519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.736 [2024-11-06 15:42:01.536526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.736 [2024-11-06 15:42:01.536532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.736 [2024-11-06 15:42:01.536545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.736 qpair failed and we were unable to recover it. 00:29:43.736 [2024-11-06 15:42:01.546427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.546477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.736 [2024-11-06 15:42:01.546490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.736 [2024-11-06 15:42:01.546498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.736 [2024-11-06 15:42:01.546504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.736 [2024-11-06 15:42:01.546517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.736 qpair failed and we were unable to recover it. 
00:29:43.736 [2024-11-06 15:42:01.556451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.556498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.736 [2024-11-06 15:42:01.556514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.736 [2024-11-06 15:42:01.556521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.736 [2024-11-06 15:42:01.556527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.736 [2024-11-06 15:42:01.556540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.736 qpair failed and we were unable to recover it. 00:29:43.736 [2024-11-06 15:42:01.566315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.566362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.736 [2024-11-06 15:42:01.566375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.736 [2024-11-06 15:42:01.566381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.736 [2024-11-06 15:42:01.566387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.736 [2024-11-06 15:42:01.566400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.736 qpair failed and we were unable to recover it. 00:29:43.736 [2024-11-06 15:42:01.576512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.736 [2024-11-06 15:42:01.576560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.576573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.576580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.576587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.576600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 
00:29:43.737 [2024-11-06 15:42:01.586402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.586447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.586460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.586467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.586473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.586486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 15:42:01.596558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.596608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.596621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.596632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.596639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.596652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 15:42:01.606549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.606597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.606610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.606617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.606623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.606636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 
00:29:43.737 [2024-11-06 15:42:01.616586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.616650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.616663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.616670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.616677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.616690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 15:42:01.626671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.626752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.626766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.626772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.626778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.626792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 15:42:01.636619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.636668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.636681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.636688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.636694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.636707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 
00:29:43.737 [2024-11-06 15:42:01.646659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.646709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.646724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.646731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.646738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.646763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 15:42:01.656737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.656835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.656849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.656856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.656863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.656876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 15:42:01.666639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.666690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.666703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.666709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.666716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.666728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 
00:29:43.737 [2024-11-06 15:42:01.676779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.676828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.676841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.676848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.676854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.676867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 15:42:01.686655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.686715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.686730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.686737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.686743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.686761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 15:42:01.696747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.696842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.696856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.737 [2024-11-06 15:42:01.696864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.737 [2024-11-06 15:42:01.696870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.737 [2024-11-06 15:42:01.696884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.737 qpair failed and we were unable to recover it. 
00:29:43.737 [2024-11-06 15:42:01.706757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.737 [2024-11-06 15:42:01.706807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.737 [2024-11-06 15:42:01.706820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.738 [2024-11-06 15:42:01.706827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.738 [2024-11-06 15:42:01.706834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.738 [2024-11-06 15:42:01.706847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.999 [2024-11-06 15:42:01.716861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.999 [2024-11-06 15:42:01.716911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.999 [2024-11-06 15:42:01.716925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.999 [2024-11-06 15:42:01.716932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.999 [2024-11-06 15:42:01.716938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.999 [2024-11-06 15:42:01.716952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.999 qpair failed and we were unable to recover it. 00:29:43.999 [2024-11-06 15:42:01.726879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.999 [2024-11-06 15:42:01.726925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.999 [2024-11-06 15:42:01.726938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.999 [2024-11-06 15:42:01.726948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.999 [2024-11-06 15:42:01.726954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.999 [2024-11-06 15:42:01.726967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.999 qpair failed and we were unable to recover it. 
00:29:43.999 [2024-11-06 15:42:01.736969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.999 [2024-11-06 15:42:01.737028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.999 [2024-11-06 15:42:01.737041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.999 [2024-11-06 15:42:01.737048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.999 [2024-11-06 15:42:01.737054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.999 [2024-11-06 15:42:01.737068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.999 qpair failed and we were unable to recover it. 00:29:43.999 [2024-11-06 15:42:01.746975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.999 [2024-11-06 15:42:01.747024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.999 [2024-11-06 15:42:01.747036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.999 [2024-11-06 15:42:01.747043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.999 [2024-11-06 15:42:01.747050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.999 [2024-11-06 15:42:01.747063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.999 qpair failed and we were unable to recover it. 00:29:43.999 [2024-11-06 15:42:01.757001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.999 [2024-11-06 15:42:01.757056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.999 [2024-11-06 15:42:01.757069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.999 [2024-11-06 15:42:01.757076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.999 [2024-11-06 15:42:01.757082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.999 [2024-11-06 15:42:01.757095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.999 qpair failed and we were unable to recover it. 
00:29:43.999 [2024-11-06 15:42:01.766997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.999 [2024-11-06 15:42:01.767043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.999 [2024-11-06 15:42:01.767056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.999 [2024-11-06 15:42:01.767062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.999 [2024-11-06 15:42:01.767069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.999 [2024-11-06 15:42:01.767082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.999 qpair failed and we were unable to recover it. 00:29:43.999 [2024-11-06 15:42:01.777050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.999 [2024-11-06 15:42:01.777102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.999 [2024-11-06 15:42:01.777115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.999 [2024-11-06 15:42:01.777122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.999 [2024-11-06 15:42:01.777128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.999 [2024-11-06 15:42:01.777140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.999 qpair failed and we were unable to recover it. 00:29:43.999 [2024-11-06 15:42:01.787082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.999 [2024-11-06 15:42:01.787164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.999 [2024-11-06 15:42:01.787177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.999 [2024-11-06 15:42:01.787184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.999 [2024-11-06 15:42:01.787190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.999 [2024-11-06 15:42:01.787203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.999 qpair failed and we were unable to recover it. 
00:29:43.999 [2024-11-06 15:42:01.797078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.999 [2024-11-06 15:42:01.797121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.999 [2024-11-06 15:42:01.797133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.999 [2024-11-06 15:42:01.797140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.999 [2024-11-06 15:42:01.797146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.999 [2024-11-06 15:42:01.797159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.999 qpair failed and we were unable to recover it. 00:29:43.999 [2024-11-06 15:42:01.807097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.999 [2024-11-06 15:42:01.807150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.999 [2024-11-06 15:42:01.807162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.999 [2024-11-06 15:42:01.807169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.999 [2024-11-06 15:42:01.807176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:43.999 [2024-11-06 15:42:01.807188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.999 qpair failed and we were unable to recover it. 00:29:44.000 [2024-11-06 15:42:01.817180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.817226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.817243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.817249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.817255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.817268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 
00:29:44.000 [2024-11-06 15:42:01.827191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.827238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.827251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.827258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.827264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.827277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 00:29:44.000 [2024-11-06 15:42:01.837213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.837258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.837271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.837277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.837284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.837296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 00:29:44.000 [2024-11-06 15:42:01.847198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.847243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.847256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.847263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.847269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.847282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 
00:29:44.000 [2024-11-06 15:42:01.857353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.857405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.857418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.857428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.857434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.857448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 00:29:44.000 [2024-11-06 15:42:01.867311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.867358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.867371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.867379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.867386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.867399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 00:29:44.000 [2024-11-06 15:42:01.877359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.877406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.877419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.877425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.877432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.877444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 
00:29:44.000 [2024-11-06 15:42:01.887314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.887361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.887373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.887380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.887386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.887399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 00:29:44.000 [2024-11-06 15:42:01.897299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.897352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.897365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.897372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.897378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.897391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 00:29:44.000 [2024-11-06 15:42:01.907391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.907436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.907449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.907455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.907462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.907474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 
00:29:44.000 [2024-11-06 15:42:01.917420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.917469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.917482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.917488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.917494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.917507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 00:29:44.000 [2024-11-06 15:42:01.927410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.927461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.927486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.927494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.927501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.927519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 00:29:44.000 [2024-11-06 15:42:01.937364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.000 [2024-11-06 15:42:01.937413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.000 [2024-11-06 15:42:01.937428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.000 [2024-11-06 15:42:01.937435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.000 [2024-11-06 15:42:01.937441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:44.000 [2024-11-06 15:42:01.937456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.000 qpair failed and we were unable to recover it. 
00:29:44.000 [2024-11-06 15:42:01.947377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.001 [2024-11-06 15:42:01.947431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.001 [2024-11-06 15:42:01.947445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.001 [2024-11-06 15:42:01.947452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.001 [2024-11-06 15:42:01.947458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.001 [2024-11-06 15:42:01.947472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.001 qpair failed and we were unable to recover it.
00:29:44.001 [2024-11-06 15:42:01.957536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.001 [2024-11-06 15:42:01.957591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.001 [2024-11-06 15:42:01.957615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.001 [2024-11-06 15:42:01.957623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.001 [2024-11-06 15:42:01.957630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.001 [2024-11-06 15:42:01.957649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.001 qpair failed and we were unable to recover it.
00:29:44.001 [2024-11-06 15:42:01.967518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.001 [2024-11-06 15:42:01.967565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.001 [2024-11-06 15:42:01.967580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.001 [2024-11-06 15:42:01.967587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.001 [2024-11-06 15:42:01.967593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.001 [2024-11-06 15:42:01.967607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.001 qpair failed and we were unable to recover it.
00:29:44.001 [2024-11-06 15:42:01.977602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.001 [2024-11-06 15:42:01.977651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.001 [2024-11-06 15:42:01.977665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.001 [2024-11-06 15:42:01.977671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.001 [2024-11-06 15:42:01.977678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.001 [2024-11-06 15:42:01.977692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.001 qpair failed and we were unable to recover it.
00:29:44.263 [2024-11-06 15:42:01.987579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.263 [2024-11-06 15:42:01.987629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.263 [2024-11-06 15:42:01.987643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.263 [2024-11-06 15:42:01.987654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.263 [2024-11-06 15:42:01.987661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.263 [2024-11-06 15:42:01.987674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.263 qpair failed and we were unable to recover it.
00:29:44.263 [2024-11-06 15:42:01.997634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.263 [2024-11-06 15:42:01.997688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.263 [2024-11-06 15:42:01.997701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.263 [2024-11-06 15:42:01.997708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.263 [2024-11-06 15:42:01.997714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.263 [2024-11-06 15:42:01.997727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.263 qpair failed and we were unable to recover it.
00:29:44.263 [2024-11-06 15:42:02.007588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.263 [2024-11-06 15:42:02.007633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.263 [2024-11-06 15:42:02.007648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.263 [2024-11-06 15:42:02.007655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.263 [2024-11-06 15:42:02.007662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.263 [2024-11-06 15:42:02.007676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.263 qpair failed and we were unable to recover it.
00:29:44.263 [2024-11-06 15:42:02.017701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.263 [2024-11-06 15:42:02.017757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.263 [2024-11-06 15:42:02.017770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.263 [2024-11-06 15:42:02.017777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.263 [2024-11-06 15:42:02.017783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.263 [2024-11-06 15:42:02.017797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.263 qpair failed and we were unable to recover it.
00:29:44.263 [2024-11-06 15:42:02.027718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.263 [2024-11-06 15:42:02.027771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.263 [2024-11-06 15:42:02.027784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.263 [2024-11-06 15:42:02.027799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.263 [2024-11-06 15:42:02.027806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.263 [2024-11-06 15:42:02.027819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.263 qpair failed and we were unable to recover it.
00:29:44.263 [2024-11-06 15:42:02.037614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.263 [2024-11-06 15:42:02.037662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.263 [2024-11-06 15:42:02.037675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.263 [2024-11-06 15:42:02.037682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.263 [2024-11-06 15:42:02.037688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.263 [2024-11-06 15:42:02.037701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.263 qpair failed and we were unable to recover it.
00:29:44.263 [2024-11-06 15:42:02.047739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.263 [2024-11-06 15:42:02.047790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.263 [2024-11-06 15:42:02.047803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.263 [2024-11-06 15:42:02.047809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.263 [2024-11-06 15:42:02.047816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.263 [2024-11-06 15:42:02.047829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.263 qpair failed and we were unable to recover it.
00:29:44.263 [2024-11-06 15:42:02.057813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.263 [2024-11-06 15:42:02.057864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.263 [2024-11-06 15:42:02.057877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.263 [2024-11-06 15:42:02.057884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.263 [2024-11-06 15:42:02.057890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.263 [2024-11-06 15:42:02.057903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.263 qpair failed and we were unable to recover it.
00:29:44.263 [2024-11-06 15:42:02.067815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.263 [2024-11-06 15:42:02.067871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.263 [2024-11-06 15:42:02.067884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.263 [2024-11-06 15:42:02.067891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.263 [2024-11-06 15:42:02.067897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.263 [2024-11-06 15:42:02.067910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.263 qpair failed and we were unable to recover it.
00:29:44.263 [2024-11-06 15:42:02.077888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.263 [2024-11-06 15:42:02.077936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.263 [2024-11-06 15:42:02.077950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.263 [2024-11-06 15:42:02.077957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.263 [2024-11-06 15:42:02.077963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.263 [2024-11-06 15:42:02.077976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.263 qpair failed and we were unable to recover it.
00:29:44.263 [2024-11-06 15:42:02.087833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.263 [2024-11-06 15:42:02.087896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.263 [2024-11-06 15:42:02.087910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.263 [2024-11-06 15:42:02.087917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.263 [2024-11-06 15:42:02.087923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.263 [2024-11-06 15:42:02.087937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.263 qpair failed and we were unable to recover it.
00:29:44.263 [2024-11-06 15:42:02.097863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.263 [2024-11-06 15:42:02.097927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.263 [2024-11-06 15:42:02.097939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.263 [2024-11-06 15:42:02.097946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.263 [2024-11-06 15:42:02.097952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.263 [2024-11-06 15:42:02.097965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.263 qpair failed and we were unable to recover it.
00:29:44.264 [2024-11-06 15:42:02.107839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.264 [2024-11-06 15:42:02.107913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.264 [2024-11-06 15:42:02.107925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.264 [2024-11-06 15:42:02.107932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.264 [2024-11-06 15:42:02.107938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.264 [2024-11-06 15:42:02.107951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.264 qpair failed and we were unable to recover it.
00:29:44.264 [2024-11-06 15:42:02.117973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.264 [2024-11-06 15:42:02.118038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.264 [2024-11-06 15:42:02.118051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.264 [2024-11-06 15:42:02.118061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.264 [2024-11-06 15:42:02.118069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.264 [2024-11-06 15:42:02.118082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.264 qpair failed and we were unable to recover it.
00:29:44.264 [2024-11-06 15:42:02.127938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.264 [2024-11-06 15:42:02.127984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.264 [2024-11-06 15:42:02.127996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.264 [2024-11-06 15:42:02.128003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.264 [2024-11-06 15:42:02.128009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.264 [2024-11-06 15:42:02.128022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.264 qpair failed and we were unable to recover it.
00:29:44.264 [2024-11-06 15:42:02.138038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.264 [2024-11-06 15:42:02.138089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.264 [2024-11-06 15:42:02.138103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.264 [2024-11-06 15:42:02.138109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.264 [2024-11-06 15:42:02.138116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.264 [2024-11-06 15:42:02.138129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.264 qpair failed and we were unable to recover it.
00:29:44.264 [2024-11-06 15:42:02.147954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.264 [2024-11-06 15:42:02.148001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.264 [2024-11-06 15:42:02.148014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.264 [2024-11-06 15:42:02.148021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.264 [2024-11-06 15:42:02.148027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.264 [2024-11-06 15:42:02.148040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.264 qpair failed and we were unable to recover it.
00:29:44.264 [2024-11-06 15:42:02.158075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.264 [2024-11-06 15:42:02.158118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.264 [2024-11-06 15:42:02.158130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.264 [2024-11-06 15:42:02.158137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.264 [2024-11-06 15:42:02.158143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.264 [2024-11-06 15:42:02.158160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.264 qpair failed and we were unable to recover it.
00:29:44.264 [2024-11-06 15:42:02.167927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.264 [2024-11-06 15:42:02.167973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.264 [2024-11-06 15:42:02.167986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.264 [2024-11-06 15:42:02.167993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.264 [2024-11-06 15:42:02.167999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.264 [2024-11-06 15:42:02.168012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.264 qpair failed and we were unable to recover it.
00:29:44.264 [2024-11-06 15:42:02.178113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.264 [2024-11-06 15:42:02.178169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.264 [2024-11-06 15:42:02.178182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.264 [2024-11-06 15:42:02.178189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.264 [2024-11-06 15:42:02.178195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.264 [2024-11-06 15:42:02.178207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.264 qpair failed and we were unable to recover it.
00:29:44.264 [2024-11-06 15:42:02.188149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.264 [2024-11-06 15:42:02.188202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.264 [2024-11-06 15:42:02.188215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.264 [2024-11-06 15:42:02.188222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.264 [2024-11-06 15:42:02.188228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.264 [2024-11-06 15:42:02.188240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.264 qpair failed and we were unable to recover it.
00:29:44.264 [2024-11-06 15:42:02.198166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.264 [2024-11-06 15:42:02.198215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.264 [2024-11-06 15:42:02.198227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.264 [2024-11-06 15:42:02.198234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.264 [2024-11-06 15:42:02.198240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.264 [2024-11-06 15:42:02.198253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.264 qpair failed and we were unable to recover it.
00:29:44.264 [2024-11-06 15:42:02.208163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.264 [2024-11-06 15:42:02.208211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.281 [2024-11-06 15:42:02.208225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.281 [2024-11-06 15:42:02.208231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.281 [2024-11-06 15:42:02.208238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.281 [2024-11-06 15:42:02.208250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.281 qpair failed and we were unable to recover it.
00:29:44.281 [2024-11-06 15:42:02.218245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.281 [2024-11-06 15:42:02.218307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.281 [2024-11-06 15:42:02.218320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.281 [2024-11-06 15:42:02.218327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.281 [2024-11-06 15:42:02.218333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.281 [2024-11-06 15:42:02.218346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.281 qpair failed and we were unable to recover it.
00:29:44.281 [2024-11-06 15:42:02.228261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.281 [2024-11-06 15:42:02.228312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.281 [2024-11-06 15:42:02.228325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.281 [2024-11-06 15:42:02.228332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.281 [2024-11-06 15:42:02.228338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.281 [2024-11-06 15:42:02.228351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.281 qpair failed and we were unable to recover it.
00:29:44.281 [2024-11-06 15:42:02.238293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.281 [2024-11-06 15:42:02.238345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.281 [2024-11-06 15:42:02.238359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.281 [2024-11-06 15:42:02.238365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.281 [2024-11-06 15:42:02.238372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.281 [2024-11-06 15:42:02.238385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.281 qpair failed and we were unable to recover it.
00:29:44.543 [2024-11-06 15:42:02.248288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.543 [2024-11-06 15:42:02.248336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.543 [2024-11-06 15:42:02.248349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.543 [2024-11-06 15:42:02.248359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.543 [2024-11-06 15:42:02.248365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.543 [2024-11-06 15:42:02.248378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.543 qpair failed and we were unable to recover it.
00:29:44.543 [2024-11-06 15:42:02.258326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.543 [2024-11-06 15:42:02.258380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.543 [2024-11-06 15:42:02.258393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.543 [2024-11-06 15:42:02.258399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.543 [2024-11-06 15:42:02.258406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.543 [2024-11-06 15:42:02.258419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.543 qpair failed and we were unable to recover it.
00:29:44.543 [2024-11-06 15:42:02.268367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.543 [2024-11-06 15:42:02.268417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.543 [2024-11-06 15:42:02.268431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.543 [2024-11-06 15:42:02.268438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.543 [2024-11-06 15:42:02.268444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.543 [2024-11-06 15:42:02.268458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.543 qpair failed and we were unable to recover it.
00:29:44.543 [2024-11-06 15:42:02.278452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.543 [2024-11-06 15:42:02.278497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.543 [2024-11-06 15:42:02.278510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.543 [2024-11-06 15:42:02.278516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.543 [2024-11-06 15:42:02.278523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.543 [2024-11-06 15:42:02.278536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.543 qpair failed and we were unable to recover it.
00:29:44.543 [2024-11-06 15:42:02.288413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.543 [2024-11-06 15:42:02.288462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.543 [2024-11-06 15:42:02.288487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.543 [2024-11-06 15:42:02.288495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.543 [2024-11-06 15:42:02.288503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.543 [2024-11-06 15:42:02.288529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.543 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.298449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.298510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.298535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.298543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.298550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.298569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.308452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.308518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.308542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.308550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.308557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.308576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.318606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.318666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.318681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.318688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.318694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.318708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.328534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.328581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.328594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.328601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.328607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.328621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.338611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.338668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.338682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.338689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.338695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.338709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.348643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.348694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.348708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.348715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.348721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.348735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.358630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.358682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.358696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.358702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.358709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.358722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.368615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.368659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.368672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.368679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.368685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.368699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.378674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.378723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.378736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.378752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.378758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.378771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.388661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.388707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.388720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.388727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.388734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.388753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.398728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.398784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.398799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.398806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.398812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.398825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.408709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.408757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.408770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.408777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.408783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.408796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.418767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.418819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.544 [2024-11-06 15:42:02.418832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.544 [2024-11-06 15:42:02.418839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.544 [2024-11-06 15:42:02.418845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.544 [2024-11-06 15:42:02.418862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.544 qpair failed and we were unable to recover it.
00:29:44.544 [2024-11-06 15:42:02.428792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.544 [2024-11-06 15:42:02.428841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.545 [2024-11-06 15:42:02.428855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.545 [2024-11-06 15:42:02.428861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.545 [2024-11-06 15:42:02.428867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.545 [2024-11-06 15:42:02.428881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.545 qpair failed and we were unable to recover it.
00:29:44.545 [2024-11-06 15:42:02.438832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.545 [2024-11-06 15:42:02.438882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.545 [2024-11-06 15:42:02.438895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.545 [2024-11-06 15:42:02.438902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.545 [2024-11-06 15:42:02.438908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.545 [2024-11-06 15:42:02.438921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.545 qpair failed and we were unable to recover it.
00:29:44.545 [2024-11-06 15:42:02.448819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.545 [2024-11-06 15:42:02.448887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.545 [2024-11-06 15:42:02.448900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.545 [2024-11-06 15:42:02.448907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.545 [2024-11-06 15:42:02.448913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.545 [2024-11-06 15:42:02.448926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.545 qpair failed and we were unable to recover it.
00:29:44.545 [2024-11-06 15:42:02.458898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.545 [2024-11-06 15:42:02.458954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.545 [2024-11-06 15:42:02.458967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.545 [2024-11-06 15:42:02.458974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.545 [2024-11-06 15:42:02.458980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.545 [2024-11-06 15:42:02.458993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.545 qpair failed and we were unable to recover it.
00:29:44.545 [2024-11-06 15:42:02.468909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.545 [2024-11-06 15:42:02.468962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.545 [2024-11-06 15:42:02.468976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.545 [2024-11-06 15:42:02.468982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.545 [2024-11-06 15:42:02.468989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.545 [2024-11-06 15:42:02.469001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.545 qpair failed and we were unable to recover it.
00:29:44.545 [2024-11-06 15:42:02.478931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.545 [2024-11-06 15:42:02.479014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.545 [2024-11-06 15:42:02.479028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.545 [2024-11-06 15:42:02.479035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.545 [2024-11-06 15:42:02.479041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.545 [2024-11-06 15:42:02.479059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.545 qpair failed and we were unable to recover it.
00:29:44.545 [2024-11-06 15:42:02.488935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.545 [2024-11-06 15:42:02.488982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.545 [2024-11-06 15:42:02.488996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.545 [2024-11-06 15:42:02.489003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.545 [2024-11-06 15:42:02.489010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.545 [2024-11-06 15:42:02.489023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.545 qpair failed and we were unable to recover it.
00:29:44.545 [2024-11-06 15:42:02.499004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.545 [2024-11-06 15:42:02.499056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.545 [2024-11-06 15:42:02.499069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.545 [2024-11-06 15:42:02.499075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.545 [2024-11-06 15:42:02.499082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.545 [2024-11-06 15:42:02.499095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.545 qpair failed and we were unable to recover it.
00:29:44.545 [2024-11-06 15:42:02.508915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.545 [2024-11-06 15:42:02.508970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.545 [2024-11-06 15:42:02.508983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.545 [2024-11-06 15:42:02.508997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.545 [2024-11-06 15:42:02.509003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.545 [2024-11-06 15:42:02.509017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.545 qpair failed and we were unable to recover it.
00:29:44.545 [2024-11-06 15:42:02.519000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.545 [2024-11-06 15:42:02.519055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.545 [2024-11-06 15:42:02.519068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.545 [2024-11-06 15:42:02.519075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.545 [2024-11-06 15:42:02.519081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.545 [2024-11-06 15:42:02.519094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.545 qpair failed and we were unable to recover it.
00:29:44.807 [2024-11-06 15:42:02.529021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.529068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.529081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.529088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.808 [2024-11-06 15:42:02.529094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.808 [2024-11-06 15:42:02.529107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.808 qpair failed and we were unable to recover it.
00:29:44.808 [2024-11-06 15:42:02.539125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.539178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.539191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.539198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.808 [2024-11-06 15:42:02.539205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.808 [2024-11-06 15:42:02.539217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.808 qpair failed and we were unable to recover it.
00:29:44.808 [2024-11-06 15:42:02.549063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.549113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.549127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.549134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.808 [2024-11-06 15:42:02.549140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.808 [2024-11-06 15:42:02.549157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.808 qpair failed and we were unable to recover it.
00:29:44.808 [2024-11-06 15:42:02.558988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.559036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.559049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.559056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.808 [2024-11-06 15:42:02.559062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.808 [2024-11-06 15:42:02.559075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.808 qpair failed and we were unable to recover it.
00:29:44.808 [2024-11-06 15:42:02.569161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.569214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.569227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.569234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.808 [2024-11-06 15:42:02.569241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.808 [2024-11-06 15:42:02.569253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.808 qpair failed and we were unable to recover it.
00:29:44.808 [2024-11-06 15:42:02.579226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.579309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.579322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.579330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.808 [2024-11-06 15:42:02.579336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.808 [2024-11-06 15:42:02.579349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.808 qpair failed and we were unable to recover it.
00:29:44.808 [2024-11-06 15:42:02.589263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.589330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.589343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.589350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.808 [2024-11-06 15:42:02.589356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.808 [2024-11-06 15:42:02.589369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.808 qpair failed and we were unable to recover it.
00:29:44.808 [2024-11-06 15:42:02.599244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.599290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.599303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.599310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.808 [2024-11-06 15:42:02.599316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.808 [2024-11-06 15:42:02.599329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.808 qpair failed and we were unable to recover it.
00:29:44.808 [2024-11-06 15:42:02.609250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.609293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.609306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.609313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.808 [2024-11-06 15:42:02.609319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.808 [2024-11-06 15:42:02.609332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.808 qpair failed and we were unable to recover it.
00:29:44.808 [2024-11-06 15:42:02.619317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.619367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.619380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.619387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.808 [2024-11-06 15:42:02.619394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.808 [2024-11-06 15:42:02.619407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.808 qpair failed and we were unable to recover it.
00:29:44.808 [2024-11-06 15:42:02.629349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.629400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.629413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.629420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.808 [2024-11-06 15:42:02.629426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.808 [2024-11-06 15:42:02.629439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.808 qpair failed and we were unable to recover it.
00:29:44.808 [2024-11-06 15:42:02.639338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.639387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.639399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.639409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.808 [2024-11-06 15:42:02.639416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.808 [2024-11-06 15:42:02.639429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.808 qpair failed and we were unable to recover it.
00:29:44.808 [2024-11-06 15:42:02.649366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.808 [2024-11-06 15:42:02.649413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.808 [2024-11-06 15:42:02.649426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.808 [2024-11-06 15:42:02.649433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.649440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.649453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.659445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.659528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.659541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.659548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.659554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.659567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.669428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.669483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.669508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.669516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.669523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.669542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.679430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.679478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.679502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.679511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.679518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.679541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.689461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.689513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.689538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.689547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.689554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.689572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.699559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.699612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.699636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.699645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.699652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.699671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.709535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.709578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.709593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.709600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.709606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.709620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.719558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.719604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.719619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.719626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.719632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.719646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.729543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.729594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.729608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.729615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.729621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.729635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.739669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.739720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.739733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.739740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.739750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.739764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.749624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.749668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.749680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.749687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.749694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.749706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.759664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.759726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.759741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.759752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.759758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.759772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.769704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.769755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.769769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.769780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.769787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.809 [2024-11-06 15:42:02.769800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.809 qpair failed and we were unable to recover it.
00:29:44.809 [2024-11-06 15:42:02.779724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.809 [2024-11-06 15:42:02.779798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.809 [2024-11-06 15:42:02.779812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.809 [2024-11-06 15:42:02.779819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.809 [2024-11-06 15:42:02.779825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:44.810 [2024-11-06 15:42:02.779838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:44.810 qpair failed and we were unable to recover it.
00:29:45.071 [2024-11-06 15:42:02.789613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.071 [2024-11-06 15:42:02.789659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.071 [2024-11-06 15:42:02.789673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.071 [2024-11-06 15:42:02.789680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.071 [2024-11-06 15:42:02.789686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.071 [2024-11-06 15:42:02.789700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.071 qpair failed and we were unable to recover it.
00:29:45.071 [2024-11-06 15:42:02.799789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.071 [2024-11-06 15:42:02.799839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.071 [2024-11-06 15:42:02.799852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.071 [2024-11-06 15:42:02.799858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.071 [2024-11-06 15:42:02.799865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.071 [2024-11-06 15:42:02.799878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.071 qpair failed and we were unable to recover it.
00:29:45.071 [2024-11-06 15:42:02.809787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.071 [2024-11-06 15:42:02.809840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.071 [2024-11-06 15:42:02.809853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.071 [2024-11-06 15:42:02.809860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.071 [2024-11-06 15:42:02.809866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.071 [2024-11-06 15:42:02.809882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.071 qpair failed and we were unable to recover it.
00:29:45.071 [2024-11-06 15:42:02.819743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.071 [2024-11-06 15:42:02.819808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.071 [2024-11-06 15:42:02.819821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.071 [2024-11-06 15:42:02.819828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.071 [2024-11-06 15:42:02.819834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.071 [2024-11-06 15:42:02.819847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.071 qpair failed and we were unable to recover it.
00:29:45.071 [2024-11-06 15:42:02.829710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.071 [2024-11-06 15:42:02.829755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.071 [2024-11-06 15:42:02.829769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.071 [2024-11-06 15:42:02.829775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.071 [2024-11-06 15:42:02.829782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.071 [2024-11-06 15:42:02.829796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.071 qpair failed and we were unable to recover it.
00:29:45.071 [2024-11-06 15:42:02.839770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.071 [2024-11-06 15:42:02.839814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.071 [2024-11-06 15:42:02.839828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.071 [2024-11-06 15:42:02.839834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.071 [2024-11-06 15:42:02.839840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.071 [2024-11-06 15:42:02.839854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.071 qpair failed and we were unable to recover it.
00:29:45.071 [2024-11-06 15:42:02.849900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.071 [2024-11-06 15:42:02.849945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.071 [2024-11-06 15:42:02.849958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.071 [2024-11-06 15:42:02.849964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.071 [2024-11-06 15:42:02.849971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.071 [2024-11-06 15:42:02.849984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.071 qpair failed and we were unable to recover it.
00:29:45.071 [2024-11-06 15:42:02.859981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.071 [2024-11-06 15:42:02.860038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.071 [2024-11-06 15:42:02.860052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.071 [2024-11-06 15:42:02.860058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.071 [2024-11-06 15:42:02.860065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.071 [2024-11-06 15:42:02.860078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.071 qpair failed and we were unable to recover it.
00:29:45.071 [2024-11-06 15:42:02.869929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.071 [2024-11-06 15:42:02.869977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.071 [2024-11-06 15:42:02.869990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.071 [2024-11-06 15:42:02.869996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.071 [2024-11-06 15:42:02.870003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.071 [2024-11-06 15:42:02.870015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.071 qpair failed and we were unable to recover it.
00:29:45.071 [2024-11-06 15:42:02.879988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.071 [2024-11-06 15:42:02.880030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.071 [2024-11-06 15:42:02.880043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.071 [2024-11-06 15:42:02.880050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.071 [2024-11-06 15:42:02.880056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.071 [2024-11-06 15:42:02.880069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.071 qpair failed and we were unable to recover it.
00:29:45.071 [2024-11-06 15:42:02.890007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.071 [2024-11-06 15:42:02.890054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.071 [2024-11-06 15:42:02.890067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.071 [2024-11-06 15:42:02.890074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.071 [2024-11-06 15:42:02.890080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:02.890093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:02.900071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:02.900122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:02.900135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:02.900146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:02.900153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:02.900166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:02.910042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:02.910084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:02.910097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:02.910103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:02.910109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:02.910122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:02.920063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:02.920107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:02.920120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:02.920127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:02.920133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:02.920146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:02.930145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:02.930194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:02.930207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:02.930214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:02.930221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:02.930234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:02.940195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:02.940246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:02.940259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:02.940266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:02.940273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:02.940289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:02.950145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:02.950233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:02.950246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:02.950253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:02.950259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:02.950272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:02.960209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:02.960251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:02.960264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:02.960271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:02.960277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:02.960290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:02.970211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:02.970256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:02.970270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:02.970277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:02.970283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:02.970296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:02.980347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:02.980417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:02.980430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:02.980437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:02.980443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:02.980456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:02.990237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:02.990284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:02.990297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:02.990304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:02.990311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:02.990324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:03.000277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:03.000326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:03.000339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:03.000346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:03.000352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:03.000365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:03.010331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:03.010390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:03.010404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:03.010411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:03.010418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:03.010431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.072 qpair failed and we were unable to recover it.
00:29:45.072 [2024-11-06 15:42:03.020276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.072 [2024-11-06 15:42:03.020324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.072 [2024-11-06 15:42:03.020337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.072 [2024-11-06 15:42:03.020343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.072 [2024-11-06 15:42:03.020350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.072 [2024-11-06 15:42:03.020363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.073 qpair failed and we were unable to recover it.
00:29:45.073 [2024-11-06 15:42:03.030250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.073 [2024-11-06 15:42:03.030334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.073 [2024-11-06 15:42:03.030347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.073 [2024-11-06 15:42:03.030358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.073 [2024-11-06 15:42:03.030364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.073 [2024-11-06 15:42:03.030377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.073 qpair failed and we were unable to recover it.
00:29:45.073 [2024-11-06 15:42:03.040399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.073 [2024-11-06 15:42:03.040439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.073 [2024-11-06 15:42:03.040452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.073 [2024-11-06 15:42:03.040459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.073 [2024-11-06 15:42:03.040465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.073 [2024-11-06 15:42:03.040478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.073 qpair failed and we were unable to recover it.
00:29:45.073 [2024-11-06 15:42:03.050314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.073 [2024-11-06 15:42:03.050363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.073 [2024-11-06 15:42:03.050377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.073 [2024-11-06 15:42:03.050384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.073 [2024-11-06 15:42:03.050391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.073 [2024-11-06 15:42:03.050405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.073 qpair failed and we were unable to recover it.
00:29:45.334 [2024-11-06 15:42:03.060520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.334 [2024-11-06 15:42:03.060572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.334 [2024-11-06 15:42:03.060585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.334 [2024-11-06 15:42:03.060593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.334 [2024-11-06 15:42:03.060599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.334 [2024-11-06 15:42:03.060612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.334 qpair failed and we were unable to recover it.
00:29:45.334 [2024-11-06 15:42:03.070493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.334 [2024-11-06 15:42:03.070570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.334 [2024-11-06 15:42:03.070583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.334 [2024-11-06 15:42:03.070590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.334 [2024-11-06 15:42:03.070597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.334 [2024-11-06 15:42:03.070613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.334 qpair failed and we were unable to recover it.
00:29:45.334 [2024-11-06 15:42:03.080389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.334 [2024-11-06 15:42:03.080434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.334 [2024-11-06 15:42:03.080447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.334 [2024-11-06 15:42:03.080454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.334 [2024-11-06 15:42:03.080460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.334 [2024-11-06 15:42:03.080473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.334 qpair failed and we were unable to recover it.
00:29:45.334 [2024-11-06 15:42:03.090558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.334 [2024-11-06 15:42:03.090609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.334 [2024-11-06 15:42:03.090622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.334 [2024-11-06 15:42:03.090629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.334 [2024-11-06 15:42:03.090635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.334 [2024-11-06 15:42:03.090648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.334 qpair failed and we were unable to recover it.
00:29:45.334 [2024-11-06 15:42:03.100625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.334 [2024-11-06 15:42:03.100672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.334 [2024-11-06 15:42:03.100685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.334 [2024-11-06 15:42:03.100692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.334 [2024-11-06 15:42:03.100698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.334 [2024-11-06 15:42:03.100711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.334 qpair failed and we were unable to recover it.
00:29:45.334 [2024-11-06 15:42:03.110602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.334 [2024-11-06 15:42:03.110644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.334 [2024-11-06 15:42:03.110657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.334 [2024-11-06 15:42:03.110664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.334 [2024-11-06 15:42:03.110670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.334 [2024-11-06 15:42:03.110683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.334 qpair failed and we were unable to recover it.
00:29:45.334 [2024-11-06 15:42:03.120618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.334 [2024-11-06 15:42:03.120667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.334 [2024-11-06 15:42:03.120680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.334 [2024-11-06 15:42:03.120687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.334 [2024-11-06 15:42:03.120693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.334 [2024-11-06 15:42:03.120706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.334 qpair failed and we were unable to recover it.
00:29:45.334 [2024-11-06 15:42:03.130629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.334 [2024-11-06 15:42:03.130674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.334 [2024-11-06 15:42:03.130686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.334 [2024-11-06 15:42:03.130693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.130699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.130712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.140743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.140793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.140806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.140813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.140819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.140832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.150708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.150757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.150770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.150777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.150783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.150796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.160717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.160763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.160776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.160786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.160792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.160805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.170771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.170815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.170828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.170835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.170841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.170854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.180700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.180751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.180766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.180773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.180779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.180793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.190685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.190728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.190741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.190753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.190759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.190772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.200836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.200881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.200894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.200901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.200907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.200924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.210754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.210798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.210811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.210818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.210824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.210837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.220978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.221033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.221046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.221053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.221059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.221072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.230838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.230882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.230895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.230902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.230908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.230921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.240918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.240988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.241001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.241008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.241014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.241027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.250851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.250897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.250910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.250917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.250923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.335 [2024-11-06 15:42:03.250936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.335 qpair failed and we were unable to recover it.
00:29:45.335 [2024-11-06 15:42:03.261030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.335 [2024-11-06 15:42:03.261090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.335 [2024-11-06 15:42:03.261103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.335 [2024-11-06 15:42:03.261110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.335 [2024-11-06 15:42:03.261116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.336 [2024-11-06 15:42:03.261128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.336 qpair failed and we were unable to recover it.
00:29:45.336 [2024-11-06 15:42:03.270900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.336 [2024-11-06 15:42:03.270946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.336 [2024-11-06 15:42:03.270959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.336 [2024-11-06 15:42:03.270966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.336 [2024-11-06 15:42:03.270972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.336 [2024-11-06 15:42:03.270985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.336 qpair failed and we were unable to recover it.
00:29:45.336 [2024-11-06 15:42:03.281069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.336 [2024-11-06 15:42:03.281116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.336 [2024-11-06 15:42:03.281129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.336 [2024-11-06 15:42:03.281135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.336 [2024-11-06 15:42:03.281142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.336 [2024-11-06 15:42:03.281155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.336 qpair failed and we were unable to recover it.
00:29:45.336 [2024-11-06 15:42:03.291046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.336 [2024-11-06 15:42:03.291092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.336 [2024-11-06 15:42:03.291108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.336 [2024-11-06 15:42:03.291115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.336 [2024-11-06 15:42:03.291122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.336 [2024-11-06 15:42:03.291135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.336 qpair failed and we were unable to recover it.
00:29:45.336 [2024-11-06 15:42:03.301166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.336 [2024-11-06 15:42:03.301215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.336 [2024-11-06 15:42:03.301228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.336 [2024-11-06 15:42:03.301235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.336 [2024-11-06 15:42:03.301241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.336 [2024-11-06 15:42:03.301254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.336 qpair failed and we were unable to recover it.
00:29:45.336 [2024-11-06 15:42:03.311128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.336 [2024-11-06 15:42:03.311167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.336 [2024-11-06 15:42:03.311180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.336 [2024-11-06 15:42:03.311187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.336 [2024-11-06 15:42:03.311193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.336 [2024-11-06 15:42:03.311206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.336 qpair failed and we were unable to recover it.
00:29:45.597 [2024-11-06 15:42:03.321148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.597 [2024-11-06 15:42:03.321192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.597 [2024-11-06 15:42:03.321205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.597 [2024-11-06 15:42:03.321212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.597 [2024-11-06 15:42:03.321218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.597 [2024-11-06 15:42:03.321231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.597 qpair failed and we were unable to recover it.
00:29:45.597 [2024-11-06 15:42:03.331242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.597 [2024-11-06 15:42:03.331287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.597 [2024-11-06 15:42:03.331300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.597 [2024-11-06 15:42:03.331307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.597 [2024-11-06 15:42:03.331314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.597 [2024-11-06 15:42:03.331330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.597 qpair failed and we were unable to recover it.
00:29:45.597 [2024-11-06 15:42:03.341273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.597 [2024-11-06 15:42:03.341324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.597 [2024-11-06 15:42:03.341337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.341343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.341350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.341363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.351296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.351379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.351394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.351401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.351407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.351423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.361279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.361321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.361334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.361341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.361347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.361360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.371328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.371378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.371391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.371397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.371403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.371416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.381374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.381424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.381439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.381445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.381452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.381465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.391359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.391403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.391417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.391423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.391429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.391442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.401397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.401474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.401487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.401494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.401500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.401513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.411401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.411452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.411477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.411486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.411493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.411511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.421454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.421504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.421523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.421530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.421536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.421551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.431468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.431546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.431559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.431566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.431572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.431586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.441481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.441543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.441567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.441576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.441583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.441601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.451530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.451615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.451629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.451636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.451643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.451657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.461600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.461651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.461665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.461672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.461678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.598 [2024-11-06 15:42:03.461696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.598 qpair failed and we were unable to recover it.
00:29:45.598 [2024-11-06 15:42:03.471589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.598 [2024-11-06 15:42:03.471633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.598 [2024-11-06 15:42:03.471646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.598 [2024-11-06 15:42:03.471653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.598 [2024-11-06 15:42:03.471659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.599 [2024-11-06 15:42:03.471673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.599 qpair failed and we were unable to recover it.
00:29:45.599 [2024-11-06 15:42:03.481473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.599 [2024-11-06 15:42:03.481514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.599 [2024-11-06 15:42:03.481527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.599 [2024-11-06 15:42:03.481534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.599 [2024-11-06 15:42:03.481540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.599 [2024-11-06 15:42:03.481553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.599 qpair failed and we were unable to recover it.
00:29:45.599 [2024-11-06 15:42:03.491570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.599 [2024-11-06 15:42:03.491617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.599 [2024-11-06 15:42:03.491630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.599 [2024-11-06 15:42:03.491637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.599 [2024-11-06 15:42:03.491643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.599 [2024-11-06 15:42:03.491657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.599 qpair failed and we were unable to recover it.
00:29:45.599 [2024-11-06 15:42:03.501707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.599 [2024-11-06 15:42:03.501762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.599 [2024-11-06 15:42:03.501775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.599 [2024-11-06 15:42:03.501782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.599 [2024-11-06 15:42:03.501788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.599 [2024-11-06 15:42:03.501802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.599 qpair failed and we were unable to recover it.
00:29:45.599 [2024-11-06 15:42:03.511566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.599 [2024-11-06 15:42:03.511610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.599 [2024-11-06 15:42:03.511623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.599 [2024-11-06 15:42:03.511630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.599 [2024-11-06 15:42:03.511636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.599 [2024-11-06 15:42:03.511650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.599 qpair failed and we were unable to recover it.
00:29:45.599 [2024-11-06 15:42:03.521730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.599 [2024-11-06 15:42:03.521776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.599 [2024-11-06 15:42:03.521789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.599 [2024-11-06 15:42:03.521796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.599 [2024-11-06 15:42:03.521802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.599 [2024-11-06 15:42:03.521816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.599 qpair failed and we were unable to recover it.
00:29:45.599 [2024-11-06 15:42:03.531645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.599 [2024-11-06 15:42:03.531703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.599 [2024-11-06 15:42:03.531718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.599 [2024-11-06 15:42:03.531725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.599 [2024-11-06 15:42:03.531731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.599 [2024-11-06 15:42:03.531753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.599 qpair failed and we were unable to recover it.
00:29:45.599 [2024-11-06 15:42:03.541808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.599 [2024-11-06 15:42:03.541862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.599 [2024-11-06 15:42:03.541876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.599 [2024-11-06 15:42:03.541883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.599 [2024-11-06 15:42:03.541889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.599 [2024-11-06 15:42:03.541903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.599 qpair failed and we were unable to recover it.
00:29:45.599 [2024-11-06 15:42:03.551812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.599 [2024-11-06 15:42:03.551854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.599 [2024-11-06 15:42:03.551871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.599 [2024-11-06 15:42:03.551878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.599 [2024-11-06 15:42:03.551884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.599 [2024-11-06 15:42:03.551897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.599 qpair failed and we were unable to recover it.
00:29:45.599 [2024-11-06 15:42:03.561747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.599 [2024-11-06 15:42:03.561802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.599 [2024-11-06 15:42:03.561815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.599 [2024-11-06 15:42:03.561822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.599 [2024-11-06 15:42:03.561828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.599 [2024-11-06 15:42:03.561841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.599 qpair failed and we were unable to recover it.
00:29:45.599 [2024-11-06 15:42:03.571836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.599 [2024-11-06 15:42:03.571882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.599 [2024-11-06 15:42:03.571895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.599 [2024-11-06 15:42:03.571901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.599 [2024-11-06 15:42:03.571907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.599 [2024-11-06 15:42:03.571921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.599 qpair failed and we were unable to recover it.
00:29:45.861 [2024-11-06 15:42:03.581935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.861 [2024-11-06 15:42:03.581985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.861 [2024-11-06 15:42:03.581998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.861 [2024-11-06 15:42:03.582005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.861 [2024-11-06 15:42:03.582011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.861 [2024-11-06 15:42:03.582024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.861 qpair failed and we were unable to recover it.
00:29:45.861 [2024-11-06 15:42:03.591921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.861 [2024-11-06 15:42:03.592004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.861 [2024-11-06 15:42:03.592019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.861 [2024-11-06 15:42:03.592026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.861 [2024-11-06 15:42:03.592032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.861 [2024-11-06 15:42:03.592050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.861 qpair failed and we were unable to recover it.
00:29:45.861 [2024-11-06 15:42:03.601926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.861 [2024-11-06 15:42:03.601980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.861 [2024-11-06 15:42:03.601993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.861 [2024-11-06 15:42:03.602000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.861 [2024-11-06 15:42:03.602006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.861 [2024-11-06 15:42:03.602019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.861 qpair failed and we were unable to recover it.
00:29:45.861 [2024-11-06 15:42:03.611955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.861 [2024-11-06 15:42:03.612003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.861 [2024-11-06 15:42:03.612016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.861 [2024-11-06 15:42:03.612023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.861 [2024-11-06 15:42:03.612029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.861 [2024-11-06 15:42:03.612041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.861 qpair failed and we were unable to recover it.
00:29:45.861 [2024-11-06 15:42:03.622046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.861 [2024-11-06 15:42:03.622096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.861 [2024-11-06 15:42:03.622109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.861 [2024-11-06 15:42:03.622116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.861 [2024-11-06 15:42:03.622122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.861 [2024-11-06 15:42:03.622135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.861 qpair failed and we were unable to recover it.
00:29:45.861 [2024-11-06 15:42:03.632013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.861 [2024-11-06 15:42:03.632086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.861 [2024-11-06 15:42:03.632099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.861 [2024-11-06 15:42:03.632106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.861 [2024-11-06 15:42:03.632112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.861 [2024-11-06 15:42:03.632124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.861 qpair failed and we were unable to recover it.
00:29:45.861 [2024-11-06 15:42:03.642055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.861 [2024-11-06 15:42:03.642101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.861 [2024-11-06 15:42:03.642114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.861 [2024-11-06 15:42:03.642121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.861 [2024-11-06 15:42:03.642127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.861 [2024-11-06 15:42:03.642140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.861 qpair failed and we were unable to recover it.
00:29:45.861 [2024-11-06 15:42:03.652082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.861 [2024-11-06 15:42:03.652195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.861 [2024-11-06 15:42:03.652209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.861 [2024-11-06 15:42:03.652216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.861 [2024-11-06 15:42:03.652222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.861 [2024-11-06 15:42:03.652235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.861 qpair failed and we were unable to recover it.
00:29:45.861 [2024-11-06 15:42:03.662167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.861 [2024-11-06 15:42:03.662280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.861 [2024-11-06 15:42:03.662293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.861 [2024-11-06 15:42:03.662300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.861 [2024-11-06 15:42:03.662306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.861 [2024-11-06 15:42:03.662319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.861 qpair failed and we were unable to recover it.
00:29:45.861 [2024-11-06 15:42:03.672114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.861 [2024-11-06 15:42:03.672185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.861 [2024-11-06 15:42:03.672198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.861 [2024-11-06 15:42:03.672205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.861 [2024-11-06 15:42:03.672211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.861 [2024-11-06 15:42:03.672224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.861 qpair failed and we were unable to recover it.
00:29:45.861 [2024-11-06 15:42:03.682026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.682068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.682084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.682091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.682097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.682110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.692082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.692128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.692141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.692148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.692154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.692167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.702103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.702150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.702163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.702170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.702176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.702189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.712151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.712209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.712223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.712229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.712235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.712248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.722264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.722304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.722317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.722323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.722329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.722345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.732269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.732316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.732329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.732336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.732342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.732355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.742347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.742398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.742411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.742419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.742425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.742438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.752366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.752453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.752466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.752472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.752478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.752491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.762388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.762434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.762448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.762455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.762461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.762474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.772411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.772457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.772470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.772477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.772483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.772497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.782417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.782468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.782481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.782488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.782494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.782507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.792450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.792496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.792509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.792516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.792522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.792535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.802483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.802527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.802543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.862 [2024-11-06 15:42:03.802549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.862 [2024-11-06 15:42:03.802556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.862 [2024-11-06 15:42:03.802569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.862 qpair failed and we were unable to recover it.
00:29:45.862 [2024-11-06 15:42:03.812538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.862 [2024-11-06 15:42:03.812589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.862 [2024-11-06 15:42:03.812605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.863 [2024-11-06 15:42:03.812612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.863 [2024-11-06 15:42:03.812619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.863 [2024-11-06 15:42:03.812632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.863 qpair failed and we were unable to recover it.
00:29:45.863 [2024-11-06 15:42:03.822560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.863 [2024-11-06 15:42:03.822612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.863 [2024-11-06 15:42:03.822624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.863 [2024-11-06 15:42:03.822631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.863 [2024-11-06 15:42:03.822637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.863 [2024-11-06 15:42:03.822650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.863 qpair failed and we were unable to recover it.
00:29:45.863 [2024-11-06 15:42:03.832583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.863 [2024-11-06 15:42:03.832623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.863 [2024-11-06 15:42:03.832636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.863 [2024-11-06 15:42:03.832643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.863 [2024-11-06 15:42:03.832650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:45.863 [2024-11-06 15:42:03.832663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:45.863 qpair failed and we were unable to recover it.
00:29:46.124 [2024-11-06 15:42:03.842616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.124 [2024-11-06 15:42:03.842661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.124 [2024-11-06 15:42:03.842673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.124 [2024-11-06 15:42:03.842680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.124 [2024-11-06 15:42:03.842687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.124 [2024-11-06 15:42:03.842700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.124 qpair failed and we were unable to recover it.
00:29:46.124 [2024-11-06 15:42:03.852637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.124 [2024-11-06 15:42:03.852681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.124 [2024-11-06 15:42:03.852694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.124 [2024-11-06 15:42:03.852701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.124 [2024-11-06 15:42:03.852711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.125 [2024-11-06 15:42:03.852724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.125 qpair failed and we were unable to recover it.
00:29:46.125 [2024-11-06 15:42:03.862702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.125 [2024-11-06 15:42:03.862780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.125 [2024-11-06 15:42:03.862793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.125 [2024-11-06 15:42:03.862800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.125 [2024-11-06 15:42:03.862806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.125 [2024-11-06 15:42:03.862819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.125 qpair failed and we were unable to recover it.
00:29:46.125 [2024-11-06 15:42:03.872681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.125 [2024-11-06 15:42:03.872728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.125 [2024-11-06 15:42:03.872741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.125 [2024-11-06 15:42:03.872754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.125 [2024-11-06 15:42:03.872761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.125 [2024-11-06 15:42:03.872774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.125 qpair failed and we were unable to recover it.
00:29:46.125 [2024-11-06 15:42:03.882677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.125 [2024-11-06 15:42:03.882724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.125 [2024-11-06 15:42:03.882737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.125 [2024-11-06 15:42:03.882744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.125 [2024-11-06 15:42:03.882755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.125 [2024-11-06 15:42:03.882768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.125 qpair failed and we were unable to recover it.
00:29:46.125 [2024-11-06 15:42:03.892733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.125 [2024-11-06 15:42:03.892782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.125 [2024-11-06 15:42:03.892795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.125 [2024-11-06 15:42:03.892802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.125 [2024-11-06 15:42:03.892808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.125 [2024-11-06 15:42:03.892821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.125 qpair failed and we were unable to recover it.
00:29:46.125 [2024-11-06 15:42:03.902772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.125 [2024-11-06 15:42:03.902820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.125 [2024-11-06 15:42:03.902833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.125 [2024-11-06 15:42:03.902840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.125 [2024-11-06 15:42:03.902846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.125 [2024-11-06 15:42:03.902859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.125 qpair failed and we were unable to recover it.
00:29:46.125 [2024-11-06 15:42:03.912650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.125 [2024-11-06 15:42:03.912697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.125 [2024-11-06 15:42:03.912710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.125 [2024-11-06 15:42:03.912716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.125 [2024-11-06 15:42:03.912723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.125 [2024-11-06 15:42:03.912735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.125 qpair failed and we were unable to recover it.
00:29:46.125 [2024-11-06 15:42:03.922677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.125 [2024-11-06 15:42:03.922717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.125 [2024-11-06 15:42:03.922730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.125 [2024-11-06 15:42:03.922737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.125 [2024-11-06 15:42:03.922743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.125 [2024-11-06 15:42:03.922760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.125 qpair failed and we were unable to recover it.
00:29:46.125 [2024-11-06 15:42:03.932710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.125 [2024-11-06 15:42:03.932758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.125 [2024-11-06 15:42:03.932772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.125 [2024-11-06 15:42:03.932778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.125 [2024-11-06 15:42:03.932784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.125 [2024-11-06 15:42:03.932798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.125 qpair failed and we were unable to recover it.
00:29:46.125 [2024-11-06 15:42:03.942748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.125 [2024-11-06 15:42:03.942791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.125 [2024-11-06 15:42:03.942807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.125 [2024-11-06 15:42:03.942814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.125 [2024-11-06 15:42:03.942820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.125 [2024-11-06 15:42:03.942833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.125 qpair failed and we were unable to recover it.
00:29:46.125 [2024-11-06 15:42:03.952879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.125 [2024-11-06 15:42:03.952921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.125 [2024-11-06 15:42:03.952934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.125 [2024-11-06 15:42:03.952941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.125 [2024-11-06 15:42:03.952947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.125 [2024-11-06 15:42:03.952960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.125 qpair failed and we were unable to recover it.
00:29:46.125 [2024-11-06 15:42:03.962872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.125 [2024-11-06 15:42:03.962916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.125 [2024-11-06 15:42:03.962928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.125 [2024-11-06 15:42:03.962935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.125 [2024-11-06 15:42:03.962942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.125 [2024-11-06 15:42:03.962955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.125 qpair failed and we were unable to recover it.
00:29:46.125 [2024-11-06 15:42:03.972942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.125 [2024-11-06 15:42:03.972993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.125 [2024-11-06 15:42:03.973005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.125 [2024-11-06 15:42:03.973012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.125 [2024-11-06 15:42:03.973018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:03.973031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:03.982992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.126 [2024-11-06 15:42:03.983040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.126 [2024-11-06 15:42:03.983053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.126 [2024-11-06 15:42:03.983060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.126 [2024-11-06 15:42:03.983069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:03.983082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:03.993035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.126 [2024-11-06 15:42:03.993123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.126 [2024-11-06 15:42:03.993137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.126 [2024-11-06 15:42:03.993144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.126 [2024-11-06 15:42:03.993150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:03.993163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:04.003028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.126 [2024-11-06 15:42:04.003073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.126 [2024-11-06 15:42:04.003086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.126 [2024-11-06 15:42:04.003093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.126 [2024-11-06 15:42:04.003100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:04.003113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:04.013043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.126 [2024-11-06 15:42:04.013092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.126 [2024-11-06 15:42:04.013109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.126 [2024-11-06 15:42:04.013117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.126 [2024-11-06 15:42:04.013123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:04.013138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:04.023108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.126 [2024-11-06 15:42:04.023154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.126 [2024-11-06 15:42:04.023168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.126 [2024-11-06 15:42:04.023175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.126 [2024-11-06 15:42:04.023181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:04.023194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:04.033132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.126 [2024-11-06 15:42:04.033174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.126 [2024-11-06 15:42:04.033187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.126 [2024-11-06 15:42:04.033194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.126 [2024-11-06 15:42:04.033200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:04.033213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:04.043168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.126 [2024-11-06 15:42:04.043218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.126 [2024-11-06 15:42:04.043231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.126 [2024-11-06 15:42:04.043238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.126 [2024-11-06 15:42:04.043244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:04.043257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:04.053189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.126 [2024-11-06 15:42:04.053239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.126 [2024-11-06 15:42:04.053252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.126 [2024-11-06 15:42:04.053259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.126 [2024-11-06 15:42:04.053265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:04.053278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:04.063216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.126 [2024-11-06 15:42:04.063261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.126 [2024-11-06 15:42:04.063274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.126 [2024-11-06 15:42:04.063281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.126 [2024-11-06 15:42:04.063287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:04.063300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:04.073101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.126 [2024-11-06 15:42:04.073146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.126 [2024-11-06 15:42:04.073162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.126 [2024-11-06 15:42:04.073169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.126 [2024-11-06 15:42:04.073176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:04.073189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:04.083235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.126 [2024-11-06 15:42:04.083279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.126 [2024-11-06 15:42:04.083292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.126 [2024-11-06 15:42:04.083299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.126 [2024-11-06 15:42:04.083305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:04.083318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:04.093286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.126 [2024-11-06 15:42:04.093335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.126 [2024-11-06 15:42:04.093348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.126 [2024-11-06 15:42:04.093355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.126 [2024-11-06 15:42:04.093361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.126 [2024-11-06 15:42:04.093375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.126 qpair failed and we were unable to recover it.
00:29:46.126 [2024-11-06 15:42:04.103330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.127 [2024-11-06 15:42:04.103377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.127 [2024-11-06 15:42:04.103390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.127 [2024-11-06 15:42:04.103397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.127 [2024-11-06 15:42:04.103403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.127 [2024-11-06 15:42:04.103416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.127 qpair failed and we were unable to recover it.
00:29:46.388 [2024-11-06 15:42:04.113313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.388 [2024-11-06 15:42:04.113356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.388 [2024-11-06 15:42:04.113369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.388 [2024-11-06 15:42:04.113376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.388 [2024-11-06 15:42:04.113386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.388 [2024-11-06 15:42:04.113399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.388 qpair failed and we were unable to recover it.
00:29:46.388 [2024-11-06 15:42:04.123369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.388 [2024-11-06 15:42:04.123413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.388 [2024-11-06 15:42:04.123426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.388 [2024-11-06 15:42:04.123433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.388 [2024-11-06 15:42:04.123439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.388 [2024-11-06 15:42:04.123452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.388 qpair failed and we were unable to recover it.
00:29:46.388 [2024-11-06 15:42:04.133372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.388 [2024-11-06 15:42:04.133420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.388 [2024-11-06 15:42:04.133433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.388 [2024-11-06 15:42:04.133440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.388 [2024-11-06 15:42:04.133446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.388 [2024-11-06 15:42:04.133459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.388 qpair failed and we were unable to recover it.
00:29:46.388 [2024-11-06 15:42:04.143307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.388 [2024-11-06 15:42:04.143355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.388 [2024-11-06 15:42:04.143369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.388 [2024-11-06 15:42:04.143375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.388 [2024-11-06 15:42:04.143381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.388 [2024-11-06 15:42:04.143394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.388 qpair failed and we were unable to recover it.
00:29:46.388 [2024-11-06 15:42:04.153453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.388 [2024-11-06 15:42:04.153540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.388 [2024-11-06 15:42:04.153554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.388 [2024-11-06 15:42:04.153562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.389 [2024-11-06 15:42:04.153568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.389 [2024-11-06 15:42:04.153585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.389 qpair failed and we were unable to recover it.
00:29:46.389 [2024-11-06 15:42:04.163476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.389 [2024-11-06 15:42:04.163520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.389 [2024-11-06 15:42:04.163535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.389 [2024-11-06 15:42:04.163542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.389 [2024-11-06 15:42:04.163548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.389 [2024-11-06 15:42:04.163562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.389 qpair failed and we were unable to recover it.
00:29:46.389 [2024-11-06 15:42:04.173509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.389 [2024-11-06 15:42:04.173560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.389 [2024-11-06 15:42:04.173585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.389 [2024-11-06 15:42:04.173594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.389 [2024-11-06 15:42:04.173601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.389 [2024-11-06 15:42:04.173620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.389 qpair failed and we were unable to recover it.
00:29:46.389 [2024-11-06 15:42:04.183564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.389 [2024-11-06 15:42:04.183660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.389 [2024-11-06 15:42:04.183675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.389 [2024-11-06 15:42:04.183682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.389 [2024-11-06 15:42:04.183689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.389 [2024-11-06 15:42:04.183703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.389 qpair failed and we were unable to recover it.
00:29:46.389 [2024-11-06 15:42:04.193551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.389 [2024-11-06 15:42:04.193597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.389 [2024-11-06 15:42:04.193611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.389 [2024-11-06 15:42:04.193618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.389 [2024-11-06 15:42:04.193624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.389 [2024-11-06 15:42:04.193638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.389 qpair failed and we were unable to recover it.
00:29:46.389 [2024-11-06 15:42:04.203587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.389 [2024-11-06 15:42:04.203627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.389 [2024-11-06 15:42:04.203645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.389 [2024-11-06 15:42:04.203652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.389 [2024-11-06 15:42:04.203658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.389 [2024-11-06 15:42:04.203672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.389 qpair failed and we were unable to recover it.
00:29:46.389 [2024-11-06 15:42:04.213618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.389 [2024-11-06 15:42:04.213663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.389 [2024-11-06 15:42:04.213678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.389 [2024-11-06 15:42:04.213685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.389 [2024-11-06 15:42:04.213691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.389 [2024-11-06 15:42:04.213705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.389 qpair failed and we were unable to recover it.
00:29:46.389 [2024-11-06 15:42:04.223658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.389 [2024-11-06 15:42:04.223707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.389 [2024-11-06 15:42:04.223720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.389 [2024-11-06 15:42:04.223727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.389 [2024-11-06 15:42:04.223733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010
00:29:46.389 [2024-11-06 15:42:04.223749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.389 qpair failed and we were unable to recover it.
00:29:46.389 [2024-11-06 15:42:04.233535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.389 [2024-11-06 15:42:04.233578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.389 [2024-11-06 15:42:04.233590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.389 [2024-11-06 15:42:04.233597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.389 [2024-11-06 15:42:04.233604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.389 [2024-11-06 15:42:04.233617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.389 qpair failed and we were unable to recover it. 00:29:46.389 [2024-11-06 15:42:04.243643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.389 [2024-11-06 15:42:04.243686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.389 [2024-11-06 15:42:04.243699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.389 [2024-11-06 15:42:04.243705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.389 [2024-11-06 15:42:04.243715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.389 [2024-11-06 15:42:04.243728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.389 qpair failed and we were unable to recover it. 00:29:46.389 [2024-11-06 15:42:04.253594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.389 [2024-11-06 15:42:04.253643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.389 [2024-11-06 15:42:04.253656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.389 [2024-11-06 15:42:04.253663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.389 [2024-11-06 15:42:04.253669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.389 [2024-11-06 15:42:04.253682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.389 qpair failed and we were unable to recover it. 
00:29:46.389 [2024-11-06 15:42:04.263723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.389 [2024-11-06 15:42:04.263774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.389 [2024-11-06 15:42:04.263788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.389 [2024-11-06 15:42:04.263795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.389 [2024-11-06 15:42:04.263801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.389 [2024-11-06 15:42:04.263814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.389 qpair failed and we were unable to recover it. 00:29:46.389 [2024-11-06 15:42:04.273775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.389 [2024-11-06 15:42:04.273822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.389 [2024-11-06 15:42:04.273835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.389 [2024-11-06 15:42:04.273842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.389 [2024-11-06 15:42:04.273848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.390 [2024-11-06 15:42:04.273861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.390 qpair failed and we were unable to recover it. 00:29:46.390 [2024-11-06 15:42:04.283875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.390 [2024-11-06 15:42:04.283940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.390 [2024-11-06 15:42:04.283953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.390 [2024-11-06 15:42:04.283960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.390 [2024-11-06 15:42:04.283966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.390 [2024-11-06 15:42:04.283979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.390 qpair failed and we were unable to recover it. 
00:29:46.390 [2024-11-06 15:42:04.293793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.390 [2024-11-06 15:42:04.293837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.390 [2024-11-06 15:42:04.293850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.390 [2024-11-06 15:42:04.293857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.390 [2024-11-06 15:42:04.293863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.390 [2024-11-06 15:42:04.293876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.390 qpair failed and we were unable to recover it. 00:29:46.390 [2024-11-06 15:42:04.303857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.390 [2024-11-06 15:42:04.303918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.390 [2024-11-06 15:42:04.303931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.390 [2024-11-06 15:42:04.303938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.390 [2024-11-06 15:42:04.303944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.390 [2024-11-06 15:42:04.303957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.390 qpair failed and we were unable to recover it. 00:29:46.390 [2024-11-06 15:42:04.313884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.390 [2024-11-06 15:42:04.313929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.390 [2024-11-06 15:42:04.313942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.390 [2024-11-06 15:42:04.313949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.390 [2024-11-06 15:42:04.313955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.390 [2024-11-06 15:42:04.313969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.390 qpair failed and we were unable to recover it. 
00:29:46.390 [2024-11-06 15:42:04.323863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.390 [2024-11-06 15:42:04.323934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.390 [2024-11-06 15:42:04.323947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.390 [2024-11-06 15:42:04.323953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.390 [2024-11-06 15:42:04.323960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.390 [2024-11-06 15:42:04.323973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.390 qpair failed and we were unable to recover it. 00:29:46.390 [2024-11-06 15:42:04.333937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.390 [2024-11-06 15:42:04.333981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.390 [2024-11-06 15:42:04.333997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.390 [2024-11-06 15:42:04.334004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.390 [2024-11-06 15:42:04.334010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.390 [2024-11-06 15:42:04.334023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.390 qpair failed and we were unable to recover it. 00:29:46.390 [2024-11-06 15:42:04.343956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.390 [2024-11-06 15:42:04.344006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.390 [2024-11-06 15:42:04.344019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.390 [2024-11-06 15:42:04.344026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.390 [2024-11-06 15:42:04.344032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.390 [2024-11-06 15:42:04.344045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.390 qpair failed and we were unable to recover it. 
00:29:46.390 [2024-11-06 15:42:04.353984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.390 [2024-11-06 15:42:04.354026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.390 [2024-11-06 15:42:04.354040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.390 [2024-11-06 15:42:04.354047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.390 [2024-11-06 15:42:04.354053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.390 [2024-11-06 15:42:04.354066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.390 qpair failed and we were unable to recover it. 00:29:46.390 [2024-11-06 15:42:04.364000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.390 [2024-11-06 15:42:04.364043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.390 [2024-11-06 15:42:04.364055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.390 [2024-11-06 15:42:04.364062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.390 [2024-11-06 15:42:04.364068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.390 [2024-11-06 15:42:04.364081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.390 qpair failed and we were unable to recover it. 00:29:46.650 [2024-11-06 15:42:04.374010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.650 [2024-11-06 15:42:04.374053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.650 [2024-11-06 15:42:04.374067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.650 [2024-11-06 15:42:04.374073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.650 [2024-11-06 15:42:04.374084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.650 [2024-11-06 15:42:04.374098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.650 qpair failed and we were unable to recover it. 
00:29:46.650 [2024-11-06 15:42:04.384053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.650 [2024-11-06 15:42:04.384103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.650 [2024-11-06 15:42:04.384116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.650 [2024-11-06 15:42:04.384123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.650 [2024-11-06 15:42:04.384130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.650 [2024-11-06 15:42:04.384143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.650 qpair failed and we were unable to recover it. 00:29:46.650 [2024-11-06 15:42:04.394088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.650 [2024-11-06 15:42:04.394134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.650 [2024-11-06 15:42:04.394147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.650 [2024-11-06 15:42:04.394154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.650 [2024-11-06 15:42:04.394160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.650 [2024-11-06 15:42:04.394174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.650 qpair failed and we were unable to recover it. 00:29:46.650 [2024-11-06 15:42:04.404078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.650 [2024-11-06 15:42:04.404122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.650 [2024-11-06 15:42:04.404135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.650 [2024-11-06 15:42:04.404142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.650 [2024-11-06 15:42:04.404148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.650 [2024-11-06 15:42:04.404161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.650 qpair failed and we were unable to recover it. 
00:29:46.650 [2024-11-06 15:42:04.414111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.650 [2024-11-06 15:42:04.414155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.650 [2024-11-06 15:42:04.414168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.650 [2024-11-06 15:42:04.414175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.650 [2024-11-06 15:42:04.414181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.650 [2024-11-06 15:42:04.414194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.650 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.424165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.424216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.424230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.424237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.424243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.424257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.434199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.434246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.434258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.434265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.434271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.434284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 
00:29:46.651 [2024-11-06 15:42:04.444210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.444253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.444266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.444273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.444279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.444292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.454219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.454267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.454279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.454286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.454292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.454305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.464279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.464368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.464384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.464391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.464398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.464411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 
00:29:46.651 [2024-11-06 15:42:04.474164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.474205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.474218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.474225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.474231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.474244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.484319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.484362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.484375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.484382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.484388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.484401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.494219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.494261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.494274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.494280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.494287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.494299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 
00:29:46.651 [2024-11-06 15:42:04.504252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.504300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.504313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.504320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.504330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.504343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.514394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.514440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.514453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.514459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.514466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.514478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.524430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.524476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.524489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.524496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.524502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.524515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 
00:29:46.651 [2024-11-06 15:42:04.534435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.534477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.534490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.534497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.534503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.534516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.544492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.544538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.544551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.544557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.544563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.544576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.554501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.554546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.554559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.554566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.554573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.554586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 
00:29:46.651 [2024-11-06 15:42:04.564544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.564584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.564597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.564604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.564610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.564623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.574582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.574629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.574642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.574649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.574655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.574668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.584588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.584639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.584652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.584659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.584665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.584678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 
00:29:46.651 [2024-11-06 15:42:04.594610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.594652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.594669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.594675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.594682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.594695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.604634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.604676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.604689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.604695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.604701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.604714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.651 [2024-11-06 15:42:04.614631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.614679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.614692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.614699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.614706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.614718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 
00:29:46.651 [2024-11-06 15:42:04.624707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.651 [2024-11-06 15:42:04.624760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.651 [2024-11-06 15:42:04.624773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.651 [2024-11-06 15:42:04.624780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.651 [2024-11-06 15:42:04.624786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.651 [2024-11-06 15:42:04.624799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.651 qpair failed and we were unable to recover it. 00:29:46.912 [2024-11-06 15:42:04.634718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.912 [2024-11-06 15:42:04.634764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.912 [2024-11-06 15:42:04.634779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.912 [2024-11-06 15:42:04.634785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.912 [2024-11-06 15:42:04.634795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.912 [2024-11-06 15:42:04.634809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.912 qpair failed and we were unable to recover it. 00:29:46.912 [2024-11-06 15:42:04.644743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.912 [2024-11-06 15:42:04.644833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.912 [2024-11-06 15:42:04.644846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.913 [2024-11-06 15:42:04.644853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.913 [2024-11-06 15:42:04.644859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.913 [2024-11-06 15:42:04.644872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.913 qpair failed and we were unable to recover it. 
00:29:46.913 [2024-11-06 15:42:04.654773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.913 [2024-11-06 15:42:04.654818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.913 [2024-11-06 15:42:04.654831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.913 [2024-11-06 15:42:04.654837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.913 [2024-11-06 15:42:04.654844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.913 [2024-11-06 15:42:04.654856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.913 qpair failed and we were unable to recover it. 00:29:46.913 [2024-11-06 15:42:04.664876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.913 [2024-11-06 15:42:04.664924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.913 [2024-11-06 15:42:04.664937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.913 [2024-11-06 15:42:04.664944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.913 [2024-11-06 15:42:04.664950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.913 [2024-11-06 15:42:04.664963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.913 qpair failed and we were unable to recover it. 00:29:46.913 [2024-11-06 15:42:04.674838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.913 [2024-11-06 15:42:04.674881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.913 [2024-11-06 15:42:04.674894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.913 [2024-11-06 15:42:04.674901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.913 [2024-11-06 15:42:04.674907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.913 [2024-11-06 15:42:04.674920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.913 qpair failed and we were unable to recover it. 
00:29:46.913 [2024-11-06 15:42:04.684838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.913 [2024-11-06 15:42:04.684898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.913 [2024-11-06 15:42:04.684911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.913 [2024-11-06 15:42:04.684918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.913 [2024-11-06 15:42:04.684924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.913 [2024-11-06 15:42:04.684937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.913 qpair failed and we were unable to recover it. 00:29:46.913 [2024-11-06 15:42:04.694870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.913 [2024-11-06 15:42:04.694919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.913 [2024-11-06 15:42:04.694932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.913 [2024-11-06 15:42:04.694939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.913 [2024-11-06 15:42:04.694945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.913 [2024-11-06 15:42:04.694958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.913 qpair failed and we were unable to recover it. 00:29:46.913 [2024-11-06 15:42:04.704929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.913 [2024-11-06 15:42:04.704975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.913 [2024-11-06 15:42:04.704988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.913 [2024-11-06 15:42:04.704995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.913 [2024-11-06 15:42:04.705001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.913 [2024-11-06 15:42:04.705013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.913 qpair failed and we were unable to recover it. 
00:29:46.913 [2024-11-06 15:42:04.714933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.913 [2024-11-06 15:42:04.714985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.913 [2024-11-06 15:42:04.714998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.913 [2024-11-06 15:42:04.715005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.913 [2024-11-06 15:42:04.715011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.913 [2024-11-06 15:42:04.715024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.913 qpair failed and we were unable to recover it. 00:29:46.913 [2024-11-06 15:42:04.724961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.913 [2024-11-06 15:42:04.725015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.913 [2024-11-06 15:42:04.725031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.913 [2024-11-06 15:42:04.725038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.913 [2024-11-06 15:42:04.725045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.913 [2024-11-06 15:42:04.725059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.913 qpair failed and we were unable to recover it. 00:29:46.913 [2024-11-06 15:42:04.734998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.913 [2024-11-06 15:42:04.735042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.913 [2024-11-06 15:42:04.735055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.913 [2024-11-06 15:42:04.735062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.913 [2024-11-06 15:42:04.735068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.913 [2024-11-06 15:42:04.735080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.913 qpair failed and we were unable to recover it. 
00:29:46.913 [2024-11-06 15:42:04.744883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.913 [2024-11-06 15:42:04.744930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.913 [2024-11-06 15:42:04.744942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.913 [2024-11-06 15:42:04.744949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.913 [2024-11-06 15:42:04.744955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.913 [2024-11-06 15:42:04.744969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.913 qpair failed and we were unable to recover it. 00:29:46.913 [2024-11-06 15:42:04.755051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.913 [2024-11-06 15:42:04.755096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.913 [2024-11-06 15:42:04.755109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.913 [2024-11-06 15:42:04.755116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.913 [2024-11-06 15:42:04.755122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.913 [2024-11-06 15:42:04.755135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.913 qpair failed and we were unable to recover it. 00:29:46.913 [2024-11-06 15:42:04.764947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.913 [2024-11-06 15:42:04.764990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.914 [2024-11-06 15:42:04.765006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.914 [2024-11-06 15:42:04.765013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.914 [2024-11-06 15:42:04.765023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:46.914 [2024-11-06 15:42:04.765037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.914 qpair failed and we were unable to recover it. 
00:29:47.704 [2024-11-06 15:42:05.436879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.704 [2024-11-06 15:42:05.436924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.704 [2024-11-06 15:42:05.436937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.704 [2024-11-06 15:42:05.436944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.704 [2024-11-06 15:42:05.436950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:47.704 [2024-11-06 15:42:05.436963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.704 qpair failed and we were unable to recover it. 00:29:47.704 [2024-11-06 15:42:05.446890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.704 [2024-11-06 15:42:05.446933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.704 [2024-11-06 15:42:05.446945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.704 [2024-11-06 15:42:05.446952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.704 [2024-11-06 15:42:05.446958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:47.704 [2024-11-06 15:42:05.446971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.704 qpair failed and we were unable to recover it. 00:29:47.704 [2024-11-06 15:42:05.456931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.704 [2024-11-06 15:42:05.456975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.704 [2024-11-06 15:42:05.456988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.704 [2024-11-06 15:42:05.456995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.704 [2024-11-06 15:42:05.457001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:47.704 [2024-11-06 15:42:05.457014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.704 qpair failed and we were unable to recover it. 
00:29:47.704 [2024-11-06 15:42:05.466999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.704 [2024-11-06 15:42:05.467047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.704 [2024-11-06 15:42:05.467061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.704 [2024-11-06 15:42:05.467068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.704 [2024-11-06 15:42:05.467074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:47.704 [2024-11-06 15:42:05.467088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.704 qpair failed and we were unable to recover it. 00:29:47.704 [2024-11-06 15:42:05.476971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.704 [2024-11-06 15:42:05.477015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.704 [2024-11-06 15:42:05.477028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.704 [2024-11-06 15:42:05.477035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.704 [2024-11-06 15:42:05.477041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:47.704 [2024-11-06 15:42:05.477054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.704 qpair failed and we were unable to recover it. 00:29:47.704 [2024-11-06 15:42:05.486992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.704 [2024-11-06 15:42:05.487052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.704 [2024-11-06 15:42:05.487065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.704 [2024-11-06 15:42:05.487071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.704 [2024-11-06 15:42:05.487078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x131f010 00:29:47.704 [2024-11-06 15:42:05.487090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.704 qpair failed and we were unable to recover it. 00:29:47.704 [2024-11-06 15:42:05.487234] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:47.704 A controller has encountered a failure and is being reset. 00:29:47.704 [2024-11-06 15:42:05.487346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132cf30 (9): Bad file descriptor 00:29:47.704 Controller properly reset. 
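Once the Keep Alive submission fails, the host stops retrying, tears the qpair down (the "Bad file descriptor" flush above) and resets the controller before re-attaching in the records that follow. For reference only, a hedged nvme-cli equivalent of that re-attach; this run drives the attach from SPDK's own initiator rather than nvme-cli, and the address, port, and subsystem NQN below are simply copied from this log:

    # Not part of this run; sketch of the same attach/detach via nvme-cli.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1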
00:29:47.705 Initializing NVMe Controllers 00:29:47.705 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:47.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:47.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:47.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:47.705 Initialization complete. Launching workers. 00:29:47.705 Starting thread on core 1 00:29:47.705 Starting thread on core 2 00:29:47.705 Starting thread on core 3 00:29:47.705 Starting thread on core 0 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:47.705 00:29:47.705 real 0m11.515s 00:29:47.705 user 0m21.721s 00:29:47.705 sys 0m3.886s 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:47.705 ************************************ 00:29:47.705 END TEST nvmf_target_disconnect_tc2 00:29:47.705 ************************************ 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.705 rmmod nvme_tcp 00:29:47.705 rmmod nvme_fabrics 00:29:47.705 rmmod nvme_keyring 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3968432 ']' 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3968432 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3968432 ']' 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 3968432 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:29:47.705 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3968432 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3968432' 00:29:47.965 killing process with pid 3968432 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 3968432 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 3968432 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.965 15:42:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.510 15:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.510 00:29:50.510 real 0m22.047s 00:29:50.510 user 0m49.862s 00:29:50.510 sys 0m10.182s 00:29:50.510 15:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:50.510 15:42:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:50.510 ************************************ 00:29:50.510 END TEST nvmf_target_disconnect 00:29:50.510 ************************************ 00:29:50.510 15:42:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:50.510 00:29:50.510 real 6m35.337s 00:29:50.510 user 11m20.719s 00:29:50.510 sys 2m17.732s 00:29:50.510 15:42:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:50.510 15:42:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.510 ************************************ 00:29:50.510 END TEST nvmf_host 00:29:50.510 ************************************ 00:29:50.510 15:42:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:50.510 15:42:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:50.510 15:42:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:50.510 15:42:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:50.510 15:42:08 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:50.510 15:42:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.510 ************************************ 00:29:50.510 START TEST nvmf_target_core_interrupt_mode 00:29:50.510 ************************************ 00:29:50.510 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:50.511 * Looking for test storage... 00:29:50.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:50.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.511 --rc genhtml_branch_coverage=1 00:29:50.511 --rc genhtml_function_coverage=1 00:29:50.511 --rc genhtml_legend=1 00:29:50.511 --rc geninfo_all_blocks=1 00:29:50.511 --rc geninfo_unexecuted_blocks=1 00:29:50.511 00:29:50.511 ' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:50.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.511 --rc genhtml_branch_coverage=1 00:29:50.511 --rc genhtml_function_coverage=1 00:29:50.511 --rc genhtml_legend=1 00:29:50.511 --rc geninfo_all_blocks=1 00:29:50.511 --rc geninfo_unexecuted_blocks=1 00:29:50.511 00:29:50.511 ' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:50.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.511 --rc genhtml_branch_coverage=1 00:29:50.511 --rc genhtml_function_coverage=1 00:29:50.511 --rc genhtml_legend=1 00:29:50.511 --rc geninfo_all_blocks=1 00:29:50.511 --rc geninfo_unexecuted_blocks=1 00:29:50.511 00:29:50.511 ' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:50.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.511 --rc genhtml_branch_coverage=1 00:29:50.511 --rc genhtml_function_coverage=1 00:29:50.511 --rc genhtml_legend=1 00:29:50.511 --rc geninfo_all_blocks=1 00:29:50.511 --rc geninfo_unexecuted_blocks=1 00:29:50.511 00:29:50.511 ' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:50.511 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:50.512 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:50.512 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:50.512 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:50.512 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:50.512 ************************************ 00:29:50.512 START TEST nvmf_abort 00:29:50.512 ************************************ 00:29:50.512 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:50.512 * Looking for test storage... 00:29:50.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:50.512 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:50.512 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:29:50.512 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:50.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.773 --rc genhtml_branch_coverage=1 00:29:50.773 --rc genhtml_function_coverage=1 00:29:50.773 --rc genhtml_legend=1 00:29:50.773 --rc geninfo_all_blocks=1 00:29:50.773 --rc geninfo_unexecuted_blocks=1 00:29:50.773 00:29:50.773 ' 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:50.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.773 --rc genhtml_branch_coverage=1 00:29:50.773 --rc genhtml_function_coverage=1 00:29:50.773 --rc genhtml_legend=1 00:29:50.773 --rc geninfo_all_blocks=1 00:29:50.773 --rc geninfo_unexecuted_blocks=1 00:29:50.773 00:29:50.773 ' 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:50.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.773 --rc genhtml_branch_coverage=1 00:29:50.773 --rc genhtml_function_coverage=1 00:29:50.773 --rc genhtml_legend=1 00:29:50.773 --rc geninfo_all_blocks=1 00:29:50.773 --rc geninfo_unexecuted_blocks=1 00:29:50.773 00:29:50.773 ' 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:50.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.773 --rc genhtml_branch_coverage=1 00:29:50.773 --rc genhtml_function_coverage=1 00:29:50.773 --rc genhtml_legend=1 00:29:50.773 --rc geninfo_all_blocks=1 00:29:50.773 --rc geninfo_unexecuted_blocks=1 00:29:50.773 00:29:50.773 ' 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.773 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.774 15:42:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.774 15:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.962 15:42:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:58.962 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
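The discovery trace here works by globbing sysfs: for each supported PCI function, nvmf/common.sh lists the kernel net devices registered under that function, which is how 0000:31:00.0 gets mapped to cvl_0_0 in the "Found net devices under ..." records below. A standalone sketch of the same lookup; the PCI address is the one reported in this log and will differ on other hosts:

    # Sketch of the /sys glob the trace above and below is exercising.
    pci=0000:31:00.0                          # first e810 port found above
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue             # skip if the glob matched nothing
        echo "Found net devices under $pci: ${dev##*/}"
    done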
00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:58.962 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:58.962 Found net devices under 0000:31:00.0: cvl_0_0 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:58.962 Found net devices under 0000:31:00.1: cvl_0_1 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:58.962 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.963 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.963 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:58.963 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:58.963 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.963 15:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:58.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:58.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:29:58.963 00:29:58.963 --- 10.0.0.2 ping statistics --- 00:29:58.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.963 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:58.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:58.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:29:58.963 00:29:58.963 --- 10.0.0.1 ping statistics --- 00:29:58.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.963 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3974612 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3974612 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3974612 ']' 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:58.963 15:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.963 [2024-11-06 15:42:16.272101] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:58.963 [2024-11-06 15:42:16.273250] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:29:58.963 [2024-11-06 15:42:16.273302] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.963 [2024-11-06 15:42:16.375053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:58.963 [2024-11-06 15:42:16.425970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.963 [2024-11-06 15:42:16.426024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.963 [2024-11-06 15:42:16.426033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.963 [2024-11-06 15:42:16.426040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.963 [2024-11-06 15:42:16.426046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.963 [2024-11-06 15:42:16.427965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:58.963 [2024-11-06 15:42:16.428193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:58.963 [2024-11-06 15:42:16.428195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.963 [2024-11-06 15:42:16.506762] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:58.963 [2024-11-06 15:42:16.507911] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:58.963 [2024-11-06 15:42:16.508469] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
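For reference, the launch-and-wait pattern traced here (nvmfappstart plus waitforlisten) reduces to the sketch below. It assumes this run's namespace name and the default RPC socket /var/tmp/spdk.sock; rpc_get_methods is used only as a cheap liveness probe, not necessarily what waitforlisten does internally.

  # Minimal sketch, assuming the paths and namespace from this run.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
      sleep 0.5
  done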
00:29:58.963 [2024-11-06 15:42:16.508602] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.224 [2024-11-06 15:42:17.149262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.224 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.485 Malloc0 00:29:59.485 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.485 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:59.485 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.485 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.486 Delay0 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.486 [2024-11-06 15:42:17.253223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.486 15:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:59.486 [2024-11-06 15:42:17.355351] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:02.030 Initializing NVMe Controllers 00:30:02.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:02.030 controller IO queue size 128 less than required 00:30:02.030 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:02.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:02.030 Initialization complete. Launching workers. 
00:30:02.030 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27420 00:30:02.030 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27477, failed to submit 66 00:30:02.030 success 27420, unsuccessful 57, failed 0 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:02.030 rmmod nvme_tcp 00:30:02.030 rmmod nvme_fabrics 00:30:02.030 rmmod nvme_keyring 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3974612 ']' 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3974612 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3974612 ']' 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3974612 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3974612 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3974612' 00:30:02.030 killing process with pid 3974612 
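Condensed from the rpc_cmd traces above, the abort test's target-side configuration and the client run come down to the calls below (all parameters exactly as logged; the relative paths assume this run's workspace layout):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The counters the run reports reconcile: 27477 abort commands submitted plus 66 that failed to submit gives 27543, which matches the 123 I/Os that completed normally plus the 27420 that were aborted.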
00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3974612 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3974612 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.030 15:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.943 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:03.943 00:30:03.943 real 0m13.534s 00:30:03.943 user 0m10.829s 00:30:03.943 sys 0m7.207s 00:30:03.943 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:03.943 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:03.943 ************************************ 00:30:03.943 END TEST nvmf_abort 00:30:03.943 ************************************ 00:30:03.943 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:03.943 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:03.943 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:04.204 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:04.204 ************************************ 00:30:04.204 START TEST nvmf_ns_hotplug_stress 00:30:04.204 ************************************ 00:30:04.204 15:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:04.204 * Looking for test storage... 
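The nvmftestfini cleanup traced above unloads the host-side NVMe modules and strips only the SPDK-tagged firewall rules. Sketched below, with the namespace deletion that _remove_spdk_ns presumably performs (the netns and device names are this run's; the ip netns delete line is an assumption, not taken from the trace):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules tagged SPDK_NVMF
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1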
00:30:04.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:04.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.204 --rc genhtml_branch_coverage=1 00:30:04.204 --rc genhtml_function_coverage=1 00:30:04.204 --rc genhtml_legend=1 00:30:04.204 --rc geninfo_all_blocks=1 00:30:04.204 --rc geninfo_unexecuted_blocks=1 00:30:04.204 00:30:04.204 ' 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:04.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.204 --rc genhtml_branch_coverage=1 00:30:04.204 --rc genhtml_function_coverage=1 00:30:04.204 --rc genhtml_legend=1 00:30:04.204 --rc geninfo_all_blocks=1 00:30:04.204 --rc geninfo_unexecuted_blocks=1 00:30:04.204 00:30:04.204 ' 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:04.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.204 --rc genhtml_branch_coverage=1 00:30:04.204 --rc genhtml_function_coverage=1 00:30:04.204 --rc genhtml_legend=1 00:30:04.204 --rc geninfo_all_blocks=1 00:30:04.204 --rc geninfo_unexecuted_blocks=1 00:30:04.204 00:30:04.204 ' 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:04.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.204 --rc genhtml_branch_coverage=1 00:30:04.204 --rc genhtml_function_coverage=1 
00:30:04.204 --rc genhtml_legend=1 00:30:04.204 --rc geninfo_all_blocks=1 00:30:04.204 --rc geninfo_unexecuted_blocks=1 00:30:04.204 00:30:04.204 ' 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.204 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
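The lcov version check traced a few records above (cmp_versions in scripts/common.sh) splits each version string on '.' and '-' and compares component by component. A minimal standalone sketch of the same less-than test, assuming purely numeric components (the real helper handles more separators and edge cases):

  lt() {
      local IFS=.- i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov 1.15 predates 2: apply the branch/function coverage options"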
00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.465 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:04.466 15:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:12.601 15:42:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:12.601 15:42:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:12.601 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:12.601 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.601 
15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:12.601 Found net devices under 0000:31:00.0: cvl_0_0 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:12.601 Found net devices under 0000:31:00.1: cvl_0_1 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:12.601 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.602 15:42:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:12.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:30:12.602 00:30:12.602 --- 10.0.0.2 ping statistics --- 00:30:12.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.602 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:30:12.602 00:30:12.602 --- 10.0.0.1 ping statistics --- 00:30:12.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.602 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3979359 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3979359 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3979359 ']' 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
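The nvmf_tcp_init sequence traced above builds the whole test topology on one host: the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace while the target-side port (cvl_0_0, 10.0.0.2) is isolated in its own namespace, and the two pings verify the path in both directions. The equivalent bare commands, assuming this run's device names:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator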
00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable
00:30:12.602 15:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:30:12.602 [2024-11-06 15:42:29.903291] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:30:12.602 [2024-11-06 15:42:29.904448] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization...
00:30:12.602 [2024-11-06 15:42:29.904500] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:12.602 [2024-11-06 15:42:30.006237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:12.602 [2024-11-06 15:42:30.060970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:12.602 [2024-11-06 15:42:30.061024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:12.602 [2024-11-06 15:42:30.061033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:12.602 [2024-11-06 15:42:30.061040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:12.602 [2024-11-06 15:42:30.061046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:12.602 [2024-11-06 15:42:30.063151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:12.602 [2024-11-06 15:42:30.063295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:12.602 [2024-11-06 15:42:30.063297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:12.602 [2024-11-06 15:42:30.153665] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:30:12.602 [2024-11-06 15:42:30.154945] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:30:12.602 [2024-11-06 15:42:30.155304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:30:12.602 [2024-11-06 15:42:30.155474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
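
Note: the core mask passed to nvmf_tgt above (-m 0xE) decodes to binary 1110, i.e. cores 1-3; that is why EAL reports "Total cores available: 3" and three reactors start, and it leaves out core 0, where spdk_nvme_perf is later pinned with -c 0x1. A quick shell sketch for decoding any such mask:

    mask=0xE                                # SPDK core mask from the nvmf_tgt command line
    for cpu in $(seq 0 7); do
        if (( (mask >> cpu) & 1 )); then
            echo "core $cpu selected"       # prints cores 1, 2 and 3 for 0xE
        fi
    done
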
00:30:12.863 15:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:30:12.863 15:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0
00:30:12.863 15:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:12.863 15:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:12.863 15:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:30:12.863 15:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:12.863 15:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:30:12.863 15:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:30:13.124 [2024-11-06 15:42:30.936239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:13.124 15:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:30:13.384 15:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:13.384 [2024-11-06 15:42:31.340883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:13.645 15:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:13.645 15:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:30:13.905 Malloc0
00:30:13.905 15:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:30:14.166 Delay0
00:30:14.166 15:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:14.444 15:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:30:14.444 NULL1
00:30:14.444 15:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
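
Note: stripped of xtrace noise, the bring-up just echoed plus the stress phase that starts below (PERF_PID=3980031) reduce to the following sketch. The RPC lines are taken verbatim from the log; the while-loop form is a reconstruction inferred from the ns_hotplug_stress.sh line numbers echoed on each iteration (@44 kill -0, @45 remove, @46 add, @49/@50 resize), so the real script may differ in detail:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # fixture: TCP transport, one subsystem, a delay bdev and a null bdev as its namespaces
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # stress: hot-remove/re-add namespace 1 and grow NULL1 while a 30s randread run is live
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID"; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size
    done

Every iteration that follows has exactly this shape, with null_size stepping 1001, 1002, ... 1055 until the 30-second perf run exits and kill -0 fails.
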
00:30:14.766 15:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3980031 00:30:14.766 15:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:14.766 15:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:14.766 15:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.057 15:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.057 15:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:15.057 15:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:15.318 true 00:30:15.318 15:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:15.318 15:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.579 15:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.840 15:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:15.840 15:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:15.840 true 00:30:15.840 15:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:15.840 15:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.100 15:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.361 15:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:16.361 15:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:16.621 true 00:30:16.621 15:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:16.621 15:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.882 15:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.882 15:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:16.882 15:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:17.142 true 00:30:17.142 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:17.142 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.403 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.663 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:17.663 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:17.663 true 00:30:17.663 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:17.663 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.924 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.183 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:18.183 15:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:18.183 true 00:30:18.183 15:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:18.183 15:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.443 15:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.702 15:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:18.702 15:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:18.702 true 00:30:18.960 15:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:18.961 15:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.961 15:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.220 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:19.220 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:19.480 true 00:30:19.480 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:19.480 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.480 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.741 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:19.741 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:20.001 true 00:30:20.001 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:20.001 15:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.262 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.262 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:20.262 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:20.523 true 00:30:20.523 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 3980031 00:30:20.523 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.783 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.783 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:20.783 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:21.043 true 00:30:21.043 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:21.043 15:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.303 15:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.563 15:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:21.563 15:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:21.563 true 00:30:21.563 15:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:21.563 15:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.823 15:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.082 15:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:22.082 15:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:22.082 true 00:30:22.082 15:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:22.082 15:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.342 15:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.602 15:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:22.602 15:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:22.602 true 00:30:22.862 15:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:22.862 15:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.862 15:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.123 15:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:23.123 15:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:23.383 true 00:30:23.383 15:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:23.383 15:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.383 15:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.644 15:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:23.644 15:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:23.904 true 00:30:23.904 15:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:23.904 15:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.164 15:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.164 15:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:24.164 15:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:24.424 true 00:30:24.424 15:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:24.424 15:42:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.684 15:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.684 15:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:24.684 15:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:24.945 true 00:30:24.945 15:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:24.945 15:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.205 15:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.205 15:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:25.205 15:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:25.465 true 00:30:25.466 15:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:25.466 15:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.726 15:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.986 15:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:25.986 15:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:25.986 true 00:30:25.986 15:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:25.986 15:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.245 15:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.506 15:42:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:26.506 15:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:26.506 true 00:30:26.506 15:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:26.506 15:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.766 15:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.026 15:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:27.026 15:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:27.026 true 00:30:27.286 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:27.286 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.286 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.546 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:27.546 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:27.806 true 00:30:27.806 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:27.806 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.806 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.066 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:28.066 15:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:28.327 true 00:30:28.327 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:28.327 15:42:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.587 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.587 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:28.587 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:28.848 true 00:30:28.848 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:28.848 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.108 15:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.108 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:29.108 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:29.367 true 00:30:29.367 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:29.367 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.627 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.627 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:29.887 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:29.887 true 00:30:29.887 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:29.887 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.148 15:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.409 15:42:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:30.409 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:30.409 true 00:30:30.409 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:30.409 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.669 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.929 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:30.929 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:30.929 true 00:30:30.929 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:30.930 15:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.190 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.450 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:31.450 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:31.450 true 00:30:31.710 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:31.711 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.711 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.970 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:31.970 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:32.230 true 00:30:32.230 15:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:32.230 15:42:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.230 15:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.490 15:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:32.490 15:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:32.750 true 00:30:32.750 15:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:32.750 15:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.011 15:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.011 15:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:33.011 15:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:33.271 true 00:30:33.271 15:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:33.272 15:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.532 15:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.532 15:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:33.532 15:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:33.793 true 00:30:33.793 15:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:33.793 15:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.053 15:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.313 15:42:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:34.313 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:34.313 true 00:30:34.314 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:34.314 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.573 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.834 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:34.834 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:34.834 true 00:30:34.834 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:34.834 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.094 15:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.354 15:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:35.354 15:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:35.354 true 00:30:35.354 15:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:35.354 15:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.615 15:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.875 15:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:35.875 15:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:35.875 true 00:30:36.136 15:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:36.136 15:42:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.136 15:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.396 15:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:36.396 15:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:36.657 true 00:30:36.657 15:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:36.657 15:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.657 15:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.918 15:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:36.918 15:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:37.178 true 00:30:37.179 15:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:37.179 15:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.179 15:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.439 15:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:37.439 15:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:37.700 true 00:30:37.700 15:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:37.700 15:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.961 15:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.961 15:42:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:37.961 15:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:38.222 true 00:30:38.222 15:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:38.222 15:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.482 15:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.743 15:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:38.743 15:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:38.743 true 00:30:38.743 15:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:38.743 15:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.003 15:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.264 15:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:39.264 15:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:39.264 true 00:30:39.264 15:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:39.264 15:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.524 15:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.785 15:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:39.785 15:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:39.785 true 00:30:40.045 15:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:40.045 15:42:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.045 15:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.305 15:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:40.305 15:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:40.566 true 00:30:40.566 15:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:40.566 15:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.566 15:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.826 15:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:40.826 15:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:41.087 true 00:30:41.087 15:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:41.087 15:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.348 15:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.348 15:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:41.348 15:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:30:41.608 true 00:30:41.608 15:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:41.608 15:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.868 15:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.868 15:42:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:41.868 15:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:42.128 true 00:30:42.128 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:42.128 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.389 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.650 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:42.650 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:30:42.650 true 00:30:42.650 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:42.650 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.912 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.172 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:43.172 15:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:43.172 true 00:30:43.433 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:43.433 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.433 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.693 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:43.693 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:43.954 true 00:30:43.954 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031 00:30:43.954 15:43:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:43.954 15:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:44.215 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:30:44.215 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:30:44.476 true
00:30:44.476 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031
00:30:44.476 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:44.737 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:44.737 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:30:44.737 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:30:44.998 true
00:30:44.998 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031
00:30:44.998 15:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:44.998 Initializing NVMe Controllers
00:30:44.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:44.998 Controller IO queue size 128, less than required.
00:30:44.998 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:44.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:44.998 Initialization complete. Launching workers.
00:30:44.998 ========================================================
00:30:44.998 Latency(us)
00:30:44.998 Device Information : IOPS MiB/s Average min max
00:30:44.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30473.87 14.88 4200.23 1083.96 11739.82
00:30:44.998 ========================================================
00:30:44.998 Total : 30473.87 14.88 4200.23 1083.96 11739.82
00:30:45.258 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:45.258 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:30:45.258 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:30:45.519 true
00:30:45.519 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3980031
00:30:45.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3980031) - No such process
00:30:45.519 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3980031
00:30:45.519 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:45.780 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:46.040 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:46.041 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:46.041 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:46.041 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:46.041 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:46.041 null0
00:30:46.041 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:46.041 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:46.041 15:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:46.301 null1
00:30:46.301 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:46.301 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:46.301 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:46.561 null2 00:30:46.561 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:46.561 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:46.561 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:46.561 null3 00:30:46.561 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:46.561 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:46.561 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:46.822 null4 00:30:46.822 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:46.822 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:46.822 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:47.082 null5 00:30:47.082 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.082 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.082 15:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:47.082 null6 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:47.343 null7 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 
00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.343 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3986206 3986208 3986209 3986211 3986213 3986215 3986217 3986219 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.344 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:47.603 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.603 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:47.603 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:47.603 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:47.603 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:47.603 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:47.603 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:47.603 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:47.865 15:43:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:47.865 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.127 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:48.127 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:48.127 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:48.127 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.127 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.127 15:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.127 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:48.386 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:48.386 15:43:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.386 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:48.386 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:48.386 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:48.386 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:48.386 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:48.386 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:48.386 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.386 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.386 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.646 15:43:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.646 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:48.906 
15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:48.906 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:49.166 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:49.166 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:49.166 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:49.166 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.166 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:49.166 15:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.166 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.426 
15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:49.426 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:49.427 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:49.427 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.427 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.687 15:43:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.687 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:49.688 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:49.947 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:49.947 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:49.948 15:43:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.948 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:50.208 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:50.208 15:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:50.208 
15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.208 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.469 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:50.470 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.470 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.470 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:50.730 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.991 15:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:51.251 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.251 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.251 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.251 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.251 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.251 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.251 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:51.251 15:43:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:51.251 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:51.251 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:51.251 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:51.512 rmmod nvme_tcp
00:30:51.512 rmmod nvme_fabrics
00:30:51.512 rmmod nvme_keyring
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3979359 ']'
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3979359
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3979359 ']'
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3979359
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3979359
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3979359'
00:30:51.512 killing process with pid 3979359
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3979359
00:30:51.512 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3979359
00:30:51.773 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:51.773 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:51.773 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:51.773 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:30:51.773 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:30:51.773 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:51.773 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:30:51.773 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:51.773 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:51.773 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:51.773 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:51.773 15:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:53.684 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:53.684
00:30:53.684 real 0m49.679s
00:30:53.684 user 3m4.444s
00:30:53.684 sys 0m23.080s
00:30:53.684 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:53.684 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:30:53.684 ************************************
00:30:53.684 END TEST nvmf_ns_hotplug_stress
00:30:53.684 ************************************
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:53.945 ************************************
00:30:53.945 START TEST nvmf_delete_subsystem
************************************
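The ns_hotplug_stress trace that just ended cycles three script lines for ten rounds: @16 advances and tests the counter i, @17 attaches one of the null0-null7 bdevs to nqn.2016-06.io.spdk:cnode1 as namespace 1-8 via rpc.py nvmf_subsystem_add_ns, and @18 detaches a namespace via nvmf_subsystem_remove_ns. A minimal sketch of that shape, reconstructed only from the trace references; the rpc_py and subsys variable names and the strictly sequential ordering are assumptions (the interleaved adds and removes in the log suggest the real script drives them concurrently):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed helper variable
    subsys=nqn.2016-06.io.spdk:cnode1

    for ((i = 0; i < 10; i++)); do                    # @16: ten stress rounds
        for n in $(seq 1 8 | shuf); do                # random order each round
            # @17: attach null bdev "null$((n - 1))" as namespace ID n
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))"
        done
        for n in $(seq 1 8 | shuf); do
            # @18: detach namespace ID n again
            "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$n"
        done
    done

Each attach or detach raises a namespace-change notification toward connected hosts while the controller stays live, which is exactly the hotplug path this test is stressing before the teardown above unloads nvme-tcp and restores the firewall rules.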
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:30:53.945 * Looking for test storage...
00:30:53.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:53.945 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:53.946 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:53.946 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:53.946 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:53.946 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:53.946 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:53.946 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:53.946 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:53.946 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:53.946 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:53.946 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:53.946 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:54.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.206 --rc genhtml_branch_coverage=1 00:30:54.206 --rc genhtml_function_coverage=1 00:30:54.206 --rc genhtml_legend=1 00:30:54.206 --rc geninfo_all_blocks=1 00:30:54.206 --rc geninfo_unexecuted_blocks=1 00:30:54.206 00:30:54.206 ' 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:54.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.206 --rc genhtml_branch_coverage=1 00:30:54.206 --rc genhtml_function_coverage=1 00:30:54.206 --rc genhtml_legend=1 00:30:54.206 --rc geninfo_all_blocks=1 00:30:54.206 --rc geninfo_unexecuted_blocks=1 00:30:54.206 00:30:54.206 ' 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:54.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.206 --rc genhtml_branch_coverage=1 00:30:54.206 --rc genhtml_function_coverage=1 00:30:54.206 --rc genhtml_legend=1 00:30:54.206 --rc geninfo_all_blocks=1 00:30:54.206 --rc geninfo_unexecuted_blocks=1 00:30:54.206 00:30:54.206 ' 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:54.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.206 --rc genhtml_branch_coverage=1 00:30:54.206 --rc genhtml_function_coverage=1 00:30:54.206 --rc 
genhtml_legend=1 00:30:54.206 --rc geninfo_all_blocks=1 00:30:54.206 --rc geninfo_unexecuted_blocks=1 00:30:54.206 00:30:54.206 ' 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.206 15:43:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.206 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.207 15:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:02.498 15:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:02.498 15:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:02.498 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:02.498 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.498 15:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:02.498 Found net devices under 0000:31:00.0: cvl_0_0 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:02.498 Found net devices under 0000:31:00.1: cvl_0_1 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:02.498 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:02.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:02.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:31:02.499 00:31:02.499 --- 10.0.0.2 ping statistics --- 00:31:02.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.499 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:02.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:31:02.499 00:31:02.499 --- 10.0.0.1 ping statistics --- 00:31:02.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.499 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3991406 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3991406 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3991406 ']' 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
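The nvmf/common.sh trace above is, in effect, a short network recipe: one of the two e810 ports is moved into a private network namespace so that initiator and target traffic crosses a real interface pair even though both ends live on the same host. A condensed, standalone sketch of those steps, with interface names and addresses taken from the trace (the iptables comment is shortened here; the script itself embeds the full rule text after an SPDK_NVMF: prefix so the rule can be filtered back out during teardown):

    # target port gets its own namespace; the initiator port stays in the default one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port; the SPDK_NVMF comment tag is what lets
    # nvmftestfini strip the rule later via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF'
    # one successful ping in each direction gates the rest of the test
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1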
00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:02.499 15:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.499 [2024-11-06 15:43:19.606584] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:02.499 [2024-11-06 15:43:19.607739] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:31:02.499 [2024-11-06 15:43:19.607796] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.499 [2024-11-06 15:43:19.707078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:02.499 [2024-11-06 15:43:19.758300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.499 [2024-11-06 15:43:19.758350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.499 [2024-11-06 15:43:19.758359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:02.499 [2024-11-06 15:43:19.758366] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:02.499 [2024-11-06 15:43:19.758372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:02.499 [2024-11-06 15:43:19.760046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.499 [2024-11-06 15:43:19.760147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.499 [2024-11-06 15:43:19.838457] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:02.499 [2024-11-06 15:43:19.838919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:02.499 [2024-11-06 15:43:19.839280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
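The target binary is then launched inside that namespace, and because this job passes --interrupt-mode, the thread.c and reactor.c notices above show every reactor and spdk_thread coming up interrupt-driven instead of busy-polling. A minimal sketch of the launch as traced, assuming the SPDK tree as the working directory (waitforlisten is the test framework's own helper from autotest_common.sh, visible in this trace):

    # run nvmf_tgt in the target namespace: shm id 0 (-i 0), all tracepoint
    # groups enabled (-e 0xFFFF), cores 0-1 (-m 0x3), interrupt mode on
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # block until the app accepts RPCs on /var/tmp/spdk.sock, so that no
    # rpc call is issued before the target is actually ready
    waitforlisten "$nvmfpid"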
00:31:02.499 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:02.499 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:31:02.499 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:02.499 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:02.499 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.499 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.499 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:02.499 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.499 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.499 [2024-11-06 15:43:20.473113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.760 [2024-11-06 15:43:20.505709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.760 NULL1 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.760 15:43:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.760 Delay0 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3991630 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:02.760 15:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:02.760 [2024-11-06 15:43:20.629822] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
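The point of the Delay0 bdev created above is to make completions slow on purpose: a null bdev completes instantly, so the test wraps NULL1 in a delay bdev with roughly one-second latencies, which guarantees that queue-depth-128 I/O from the 5-second spdk_nvme_perf run is still outstanding when the subsystem is deleted two seconds in. The rpc_cmd calls map onto SPDK's standard RPC client roughly as follows (a sketch; scripts/rpc.py talks to the socket the target opened, and bdev_delay_create's latency arguments are in microseconds):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10      # -a: allow any host, -m: max namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MiB, 512-byte blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s avg/p99, reads and writes
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0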
00:31:04.675 15:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:04.675 15:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.675 15:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 [2024-11-06 15:43:22.814654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1416f00 is same with the state(6) to be set 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed 
with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O 
failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 starting I/O failed: -6 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 [2024-11-06 15:43:22.816196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f290c000c40 is same with the state(6) to be set 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.937 Read completed with error (sct=0, sc=8) 00:31:04.937 Write completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error 
(sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:04.938 Write completed with error (sct=0, sc=8) 00:31:04.938 Read completed with error (sct=0, sc=8) 00:31:05.879 [2024-11-06 15:43:23.771031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14185e0 is same with the state(6) to be set 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 [2024-11-06 15:43:23.817916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14170e0 is same with the state(6) to be set 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 [2024-11-06 15:43:23.818759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f290c00d020 is same with the state(6) to be set 00:31:05.879 Read completed with error 
(sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 [2024-11-06 15:43:23.818873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14174a0 is same with the state(6) to be set 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 Read completed with error (sct=0, sc=8) 00:31:05.879 Write completed with error (sct=0, sc=8) 00:31:05.879 [2024-11-06 15:43:23.818969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f290c00d7e0 is same with the state(6) to be set 00:31:05.879 Initializing NVMe Controllers 00:31:05.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:05.879 Controller IO queue size 128, less than required. 00:31:05.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:05.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:05.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:05.879 Initialization complete. Launching workers. 
00:31:05.879 ======================================================== 00:31:05.879 Latency(us) 00:31:05.879 Device Information : IOPS MiB/s Average min max 00:31:05.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.33 0.08 914044.87 386.55 1012023.75 00:31:05.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.90 0.07 1009662.01 328.71 2002402.68 00:31:05.879 ======================================================== 00:31:05.879 Total : 315.23 0.15 960422.95 328.71 2002402.68 00:31:05.879 00:31:05.879 [2024-11-06 15:43:23.819531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14185e0 (9): Bad file descriptor 00:31:05.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:05.879 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.879 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:05.879 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3991630 00:31:05.879 15:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3991630 00:31:06.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3991630) - No such process 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3991630 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3991630 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3991630 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:06.451 [2024-11-06 15:43:24.353618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3992376 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3992376 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:06.451 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:06.712 [2024-11-06 15:43:24.452534] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
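The wall of "Read/Write completed with error (sct=0, sc=8)" above is the expected outcome, not a failure: status code type 0 with status code 0x08 is, per the NVMe base specification, "Command Aborted due to SQ Deletion", which is exactly what in-flight commands should report when nvmf_delete_subsystem tears their queues down, and the submit-side "starting I/O failed: -6" is consistent with -ENXIO from qpairs that no longer exist. spdk_nvme_perf exiting with "errors occurred" rather than hanging is what the test is after; the script then only polls until the process is gone and asserts a non-zero exit status through its NOT helper. A condensed sketch of that pattern under those assumptions (variable names follow the script; the exact failure path differs in the original):

    # phase 1: delete the subsystem while ~1 s-latency I/O is still queued,
    # then wait for perf to notice and exit; $perf_pid was captured at launch
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 only probes PID existence
        (( delay++ > 30 )) && exit 1            # bail out after ~15 s of 0.5 s naps
        sleep 0.5
    done

The second phase, started above by re-creating the subsystem, runs the same 128-deep random 70/30 read/write 512-byte workload for just 3 seconds with nothing deleted underneath it, so every I/O completes; in the latency summary further down, min/avg/max all sit just above the delay bdev's 1,000,000 us floor.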
00:31:06.972 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:06.972 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3992376 00:31:06.972 15:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:07.542 15:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:07.542 15:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3992376 00:31:07.542 15:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:08.111 15:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:08.111 15:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3992376 00:31:08.111 15:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:08.681 15:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:08.681 15:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3992376 00:31:08.681 15:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:08.941 15:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:08.941 15:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3992376 00:31:08.941 15:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:09.511 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:09.511 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3992376 00:31:09.511 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:09.772 Initializing NVMe Controllers 00:31:09.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:09.772 Controller IO queue size 128, less than required. 00:31:09.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:09.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:09.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:09.772 Initialization complete. Launching workers. 
00:31:09.772 ========================================================
00:31:09.772 Latency(us)
00:31:09.772 Device Information : IOPS MiB/s Average min max
00:31:09.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002427.85 1000215.98 1041908.32
00:31:09.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005178.10 1000509.98 1041490.24
00:31:09.772 ========================================================
00:31:09.772 Total : 256.00 0.12 1003802.97 1000215.98 1041908.32
00:31:09.772
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3992376
00:31:10.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3992376) - No such process
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3992376
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:10.033 rmmod nvme_tcp
00:31:10.033 rmmod nvme_fabrics
00:31:10.033 rmmod nvme_keyring
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3991406 ']'
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3991406
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3991406 ']'
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3991406
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:10.033 15:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3991406 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3991406' 00:31:10.293 killing process with pid 3991406 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3991406 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3991406 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.293 15:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:12.846 00:31:12.846 real 0m18.504s 00:31:12.846 user 0m26.872s 00:31:12.846 sys 0m7.425s 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:12.846 ************************************ 00:31:12.846 END TEST nvmf_delete_subsystem 00:31:12.846 ************************************ 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:12.846 ************************************ 00:31:12.846 START TEST nvmf_host_management 00:31:12.846 ************************************ 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:12.846 * Looking for test storage... 00:31:12.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:12.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.846 --rc genhtml_branch_coverage=1 00:31:12.846 --rc genhtml_function_coverage=1 00:31:12.846 --rc genhtml_legend=1 00:31:12.846 --rc geninfo_all_blocks=1 00:31:12.846 --rc geninfo_unexecuted_blocks=1 00:31:12.846 00:31:12.846 ' 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:12.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.846 --rc genhtml_branch_coverage=1 00:31:12.846 --rc genhtml_function_coverage=1 00:31:12.846 --rc genhtml_legend=1 00:31:12.846 --rc geninfo_all_blocks=1 00:31:12.846 --rc geninfo_unexecuted_blocks=1 00:31:12.846 00:31:12.846 ' 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:12.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.846 --rc genhtml_branch_coverage=1 00:31:12.846 --rc genhtml_function_coverage=1 00:31:12.846 --rc genhtml_legend=1 00:31:12.846 --rc geninfo_all_blocks=1 00:31:12.846 --rc geninfo_unexecuted_blocks=1 00:31:12.846 00:31:12.846 ' 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:12.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.846 --rc genhtml_branch_coverage=1 00:31:12.846 --rc genhtml_function_coverage=1 00:31:12.846 --rc genhtml_legend=1 
00:31:12.846 --rc geninfo_all_blocks=1 00:31:12.846 --rc geninfo_unexecuted_blocks=1 00:31:12.846 00:31:12.846 ' 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:12.846 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:12.847 15:43:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:12.847 15:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.988 15:43:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:20.988 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.988 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:20.989 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
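Note: the trace above is nvmf/common.sh discovering usable NICs. It builds per-family PCI ID lists (e810, x722, mlx), keeps only the e810 list (the [[ e810 == e810 ]] branch), and maps each matching PCI function to its kernel net interface by expanding /sys/bus/pci/devices/$pci/net/*. A minimal standalone sketch of that sysfs lookup, illustrative only and not the harness code:

# List kernel net interfaces backed by Intel E810 functions
# (vendor 0x8086, device 0x159b, the IDs matched in the trace above).
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "${pci##*/} -> ${net##*/}"
    done
done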
00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:20.989 Found net devices under 0000:31:00.0: cvl_0_0 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:20.989 Found net devices under 0000:31:00.1: cvl_0_1 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:20.989 15:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:20.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:20.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms
00:31:20.989
00:31:20.989 --- 10.0.0.2 ping statistics ---
00:31:20.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:20.989 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:20.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:20.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms
00:31:20.989
00:31:20.989 --- 10.0.0.1 ping statistics ---
00:31:20.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:20.989 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3997134
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3997134
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3997134 ']'
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:20.989 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100
00:31:20.990 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:20.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.990 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:20.990 15:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:20.990 [2024-11-06 15:43:38.245840] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:20.990 [2024-11-06 15:43:38.246992] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:31:20.990 [2024-11-06 15:43:38.247042] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.990 [2024-11-06 15:43:38.348667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:20.990 [2024-11-06 15:43:38.402448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.990 [2024-11-06 15:43:38.402499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.990 [2024-11-06 15:43:38.402508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.990 [2024-11-06 15:43:38.402516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.990 [2024-11-06 15:43:38.402522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.990 [2024-11-06 15:43:38.404590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.990 [2024-11-06 15:43:38.404769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.990 [2024-11-06 15:43:38.404912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.990 [2024-11-06 15:43:38.404912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:20.990 [2024-11-06 15:43:38.488153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:20.990 [2024-11-06 15:43:38.488541] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:20.990 [2024-11-06 15:43:38.489474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:20.990 [2024-11-06 15:43:38.489789] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:20.990 [2024-11-06 15:43:38.489839] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
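Note: at this point nvmftestinit is done and nvmfappstart has brought the target up. The EAL log above shows nvmf_tgt running inside the cvl_0_0_ns_spdk namespace with four reactors (-m 0x1E, cores 1-4) and every spdk_thread switched to interrupt mode. Reproducing that launch by hand would look roughly like the sketch below; framework_wait_init is used here as a stand-in for the harness's waitforlisten helper, which is an assumption about equivalence, not what the script literally calls:

# Launch the target in the test namespace with the flags from the trace:
# -i 0 (shm id), -e 0xFFFF (tracepoint mask), -m 0x1E (cores 1-4),
# --interrupt-mode instead of the default polling reactors.
sudo ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
sudo ./scripts/rpc.py framework_wait_init   # block until startup completes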
00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.251 [2024-11-06 15:43:39.113817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.251 Malloc0 00:31:21.251 [2024-11-06 15:43:39.218118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:21.251 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3997505 00:31:21.512 15:43:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3997505 /var/tmp/bdevperf.sock 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3997505 ']' 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:21.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.512 { 00:31:21.512 "params": { 00:31:21.512 "name": "Nvme$subsystem", 00:31:21.512 "trtype": "$TEST_TRANSPORT", 00:31:21.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.512 "adrfam": "ipv4", 00:31:21.512 "trsvcid": "$NVMF_PORT", 00:31:21.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.512 "hdgst": ${hdgst:-false}, 00:31:21.512 "ddgst": ${ddgst:-false} 00:31:21.512 }, 00:31:21.512 "method": "bdev_nvme_attach_controller" 00:31:21.512 } 00:31:21.512 EOF 00:31:21.512 )") 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
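Note: gen_nvmf_target_json builds the --json config that bdevperf reads from /dev/fd/63, one heredoc per subsystem with $subsystem substituted into the NQNs, then validated with jq. A condensed sketch of the same pattern for the single subsystem 0; any wrapping the harness adds around these params is omitted here, and the values simply mirror the printf output below:

# Substitute the subsystem index into a JSON fragment, then let jq
# validate and pretty-print it, mirroring the heredoc + 'jq .' above.
subsystem=0
jq . <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF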
00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:21.512 15:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:21.512 "params": { 00:31:21.512 "name": "Nvme0", 00:31:21.512 "trtype": "tcp", 00:31:21.512 "traddr": "10.0.0.2", 00:31:21.512 "adrfam": "ipv4", 00:31:21.512 "trsvcid": "4420", 00:31:21.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.512 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:21.512 "hdgst": false, 00:31:21.512 "ddgst": false 00:31:21.512 }, 00:31:21.512 "method": "bdev_nvme_attach_controller" 00:31:21.512 }' 00:31:21.512 [2024-11-06 15:43:39.327328] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:31:21.512 [2024-11-06 15:43:39.327401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997505 ] 00:31:21.512 [2024-11-06 15:43:39.421601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.512 [2024-11-06 15:43:39.474341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.772 Running I/O for 10 seconds... 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=800 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 800 -ge 100 ']' 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.346 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.346 [2024-11-06 15:43:40.233467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 [2024-11-06 15:43:40.233529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 [2024-11-06 15:43:40.233540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 [2024-11-06 15:43:40.233549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 [2024-11-06 15:43:40.233558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 [2024-11-06 15:43:40.233566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 [2024-11-06 15:43:40.233573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 [2024-11-06 15:43:40.233580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 [2024-11-06 15:43:40.233587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 [2024-11-06 15:43:40.233594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 [2024-11-06 15:43:40.233601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 
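Note: the waitforio loop above polls bdev_get_iostat until Nvme0n1 has completed at least 100 reads (it saw 800), then the test removes the host from the subsystem while bdevperf still has up to 64 commands in flight (-q 64); the tcp.c:1773 recv-state messages are the target walking that qpair through disconnect. The same RPC spelled out with scripts/rpc.py rather than the harness's rpc_cmd wrapper:

# Revoke host0's access to cnode0 on the running target; active
# connections from that host are disconnected and their I/O aborted.
sudo ./scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0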
[2024-11-06 15:43:40.233608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 [2024-11-06 15:43:40.233615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aa80 is same with the state(6) to be set 00:31:22.346 [2024-11-06 15:43:40.234182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.346 [2024-11-06 15:43:40.234239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.346 [2024-11-06 15:43:40.234251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.347 [2024-11-06 15:43:40.234260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.234269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.347 [2024-11-06 15:43:40.234278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.234296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.347 [2024-11-06 15:43:40.234304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.234313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbda280 is same with the state(6) to be set 00:31:22.347 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.347 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:22.347 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.347 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.347 [2024-11-06 15:43:40.241825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.241862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.241884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.241893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.241904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.241913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 
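Note: the dump that starts here is the initiator side printing every in-flight command completed with ABORTED - SQ DELETION, i.e. status code type 0x0 (generic) and status code 0x08, shown as (00/08). The lba stride between entries is plain arithmetic from the job parameters logged earlier:

# bdevperf runs -o 65536 (64 KiB I/O) against a Malloc bdev created with
# MALLOC_BLOCK_SIZE=512, so each command spans 65536 / 512 = 128 blocks,
# matching len:128 and the lba step of 128 between the entries below.
echo $((65536 / 512))   # -> 128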
[2024-11-06 15:43:40.241923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.241931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.241943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.241950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.241960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.241968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.241979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.241987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.241997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 
15:43:40.242117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 
15:43:40.242306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 15:43:40.242465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-11-06 15:43:40.242472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.347 [2024-11-06 
15:43:40.242482 .. 15:43:40.243029] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 31 in-flight WRITE commands (sqid:1 cid:3-33 nsid:1, lba:115072 through lba:118912 in steps of 128, len:128 each, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every one completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:22.348 [2024-11-06 15:43:40.244341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:31:22.348 task offset: 110848 on job bdev=Nvme0n1 fails
00:31:22.348
00:31:22.348 Latency(us)
00:31:22.348 [2024-11-06T14:43:40.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:22.348 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:22.348 Job: Nvme0n1 ended in about 0.56 seconds with error
00:31:22.348 Verification LBA range: start 0x0 length 0x400
00:31:22.348 Nvme0n1 : 0.56 1543.27 96.45 114.05 0.00 37647.87 1727.15 34297.17
00:31:22.348 [2024-11-06T14:43:40.331Z] ===================================================================================================================
00:31:22.348 [2024-11-06T14:43:40.331Z] Total : 1543.27 96.45 114.05 0.00 37647.87 1727.15 34297.17
00:31:22.348 [2024-11-06 15:43:40.246674] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:22.348 [2024-11-06 15:43:40.246712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbda280 (9): Bad file descriptor
00:31:22.348 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:22.348 15:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:31:22.610 [2024-11-06 15:43:40.340894] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
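The abort storm above shows a 64-deep queue being drained as ABORTED - SQ DELETION once the connection to the target is torn down mid-run; the host-side bdev_nvme layer then reconnects and resets the controller. A rough shell sketch of provoking the same recovery path, with hypothetical pid bookkeeping and config path (not the literal host_management.sh code):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # as on this rig
# Start bdevperf with 64 writes in flight against the NVMe-oF TCP target
# (flags match the failing run above: -q 64 -o 65536 -w verify).
"$SPDK_DIR/build/examples/bdevperf" --json bdevperf.json -q 64 -o 65536 -w verify -t 10 &
perf_pid=$!
sleep 2
# Kill the target hard mid-run; its submission queues vanish and every
# queued WRITE completes as "ABORTED - SQ DELETION (00/08)".
kill -9 "$tgt_pid"            # assumption: target pid captured at launch
# Restart the target; the host sees the dead socket ("Bad file descriptor"),
# disconnects, then logs "Resetting controller successful" once reconnected.
"$SPDK_DIR/build/bin/nvmf_tgt" &   # target flags elided in this sketch
tgt_pid=$!
wait "$perf_pid" || true      # the first bdevperf job still ends in error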
00:31:23.552 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3997505 00:31:23.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3997505) - No such process 00:31:23.552 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:23.553 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:23.553 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:23.553 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:23.553 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:23.553 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:23.553 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:23.553 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:23.553 { 00:31:23.553 "params": { 00:31:23.553 "name": "Nvme$subsystem", 00:31:23.553 "trtype": "$TEST_TRANSPORT", 00:31:23.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.553 "adrfam": "ipv4", 00:31:23.553 "trsvcid": "$NVMF_PORT", 00:31:23.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.553 "hdgst": ${hdgst:-false}, 00:31:23.553 "ddgst": ${ddgst:-false} 00:31:23.553 }, 00:31:23.553 "method": "bdev_nvme_attach_controller" 00:31:23.553 } 00:31:23.553 EOF 00:31:23.553 )") 00:31:23.553 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:23.553 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:23.553 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:23.553 15:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:23.553 "params": { 00:31:23.553 "name": "Nvme0", 00:31:23.553 "trtype": "tcp", 00:31:23.553 "traddr": "10.0.0.2", 00:31:23.553 "adrfam": "ipv4", 00:31:23.553 "trsvcid": "4420", 00:31:23.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:23.553 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:23.553 "hdgst": false, 00:31:23.553 "ddgst": false 00:31:23.553 }, 00:31:23.553 "method": "bdev_nvme_attach_controller" 00:31:23.553 }' 00:31:23.553 [2024-11-06 15:43:41.316205] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
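For the retry, gen_nvmf_target_json pipes one bdev_nvme_attach_controller entry to bdevperf over /dev/fd/62, as traced above. Saved to a plain file, an equivalent invocation would look roughly like this; the params block is copied from the printf output above, while the outer "subsystems" wrapper is an assumption here (the trace only shows the inner object after jq):

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as the run above: queue depth 64, 64 KiB IOs, verify, 1 second.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1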
00:31:23.553 [2024-11-06 15:43:41.316282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3997854 ]
00:31:23.553 [2024-11-06 15:43:41.415117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:23.553 [2024-11-06 15:43:41.467115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:23.814 Running I/O for 1 seconds...
00:31:25.015 2080.00 IOPS, 130.00 MiB/s
00:31:25.015 Latency(us)
00:31:25.015 [2024-11-06T14:43:42.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.015 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:25.015 Verification LBA range: start 0x0 length 0x400
00:31:25.015 Nvme0n1 : 1.06 2043.27 127.70 0.00 0.00 29553.07 1747.63 45438.29
00:31:25.015 [2024-11-06T14:43:42.998Z] ===================================================================================================================
00:31:25.015 [2024-11-06T14:43:42.998Z] Total : 2043.27 127.70 0.00 0.00 29553.07 1747.63 45438.29
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:25.015 rmmod nvme_tcp
00:31:25.015 rmmod nvme_fabrics
00:31:25.015 rmmod nvme_keyring
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3997134 ']'
00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3997134
00:31:25.015 15:43:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3997134 ']' 00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3997134 00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:25.015 15:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3997134 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3997134' 00:31:25.277 killing process with pid 3997134 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3997134 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3997134 00:31:25.277 [2024-11-06 15:43:43.136871] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.277 15:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.281 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:27.281 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:27.281 00:31:27.281 real 0m14.932s 00:31:27.281 user 
0m20.001s 00:31:27.281 sys 0m7.615s 00:31:27.281 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:27.281 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:27.281 ************************************ 00:31:27.281 END TEST nvmf_host_management 00:31:27.281 ************************************ 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:27.542 ************************************ 00:31:27.542 START TEST nvmf_lvol 00:31:27.542 ************************************ 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:27.542 * Looking for test storage... 00:31:27.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
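The 'lt 1.15 2' probe above has entered scripts/common.sh's cmp_versions; the element-wise loop it runs next (traced below) boils down to this simplified sketch, with a hypothetical helper name rather than the verbatim code:

# Split both versions on ".-:", pad the shorter with zeros, compare per field.
vlt() {   # returns 0 when version $1 sorts strictly before version $2
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # smaller field: older
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # larger field: newer
    done
    return 1          # equal versions are not "less than"
}
vlt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the trace: ver1_l=2, ver2_l=1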
00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:27.542 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:27.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.543 --rc genhtml_branch_coverage=1 00:31:27.543 --rc genhtml_function_coverage=1 00:31:27.543 --rc genhtml_legend=1 00:31:27.543 --rc geninfo_all_blocks=1 00:31:27.543 --rc geninfo_unexecuted_blocks=1 00:31:27.543 00:31:27.543 ' 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:27.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.543 --rc genhtml_branch_coverage=1 00:31:27.543 --rc genhtml_function_coverage=1 00:31:27.543 --rc genhtml_legend=1 00:31:27.543 --rc geninfo_all_blocks=1 00:31:27.543 --rc geninfo_unexecuted_blocks=1 00:31:27.543 00:31:27.543 ' 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:27.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.543 --rc genhtml_branch_coverage=1 00:31:27.543 --rc genhtml_function_coverage=1 00:31:27.543 --rc genhtml_legend=1 00:31:27.543 --rc geninfo_all_blocks=1 00:31:27.543 --rc geninfo_unexecuted_blocks=1 00:31:27.543 00:31:27.543 ' 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:27.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.543 --rc genhtml_branch_coverage=1 00:31:27.543 --rc genhtml_function_coverage=1 
00:31:27.543 --rc genhtml_legend=1 00:31:27.543 --rc geninfo_all_blocks=1 00:31:27.543 --rc geninfo_unexecuted_blocks=1 00:31:27.543 00:31:27.543 ' 00:31:27.543 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.803 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.804 15:43:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:27.804 15:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:35.938 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.938 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.938 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.938 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.939 15:43:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:35.939 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:35.939 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:35.939 Found net devices under 0000:31:00.0: cvl_0_0 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:35.939 Found net devices under 0000:31:00.1: cvl_0_1 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.939 
15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.939 15:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.939 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.939 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.939 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.939 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:35.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:31:35.940 00:31:35.940 --- 10.0.0.2 ping statistics --- 00:31:35.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.940 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:35.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:31:35.940 00:31:35.940 --- 10.0.0.1 ping statistics --- 00:31:35.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.940 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4002275 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4002275 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 4002275 ']' 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:35.940 15:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:35.940 [2024-11-06 15:43:53.226873] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
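The 10.0.0.x topology behind those two pings was assembled a few records earlier by nvmf_tcp_init: the first e810 port (cvl_0_0) moves into a private network namespace as the target side, the second (cvl_0_1) stays in the root namespace as the initiator, and TCP port 4420 is opened in the firewall. Condensed from the trace above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns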
00:31:35.940 [2024-11-06 15:43:53.228021] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:31:35.940 [2024-11-06 15:43:53.228073] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.940 [2024-11-06 15:43:53.326844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:35.940 [2024-11-06 15:43:53.379700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.940 [2024-11-06 15:43:53.379762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.940 [2024-11-06 15:43:53.379771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.940 [2024-11-06 15:43:53.379779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.940 [2024-11-06 15:43:53.379786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.940 [2024-11-06 15:43:53.381636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.940 [2024-11-06 15:43:53.381813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.940 [2024-11-06 15:43:53.381837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.940 [2024-11-06 15:43:53.460733] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:35.940 [2024-11-06 15:43:53.461854] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:35.940 [2024-11-06 15:43:53.462307] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:35.940 [2024-11-06 15:43:53.462443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
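With the interrupt-mode target up on its three reactors, the rest of the test drives rpc.py against it: two 64 MiB malloc bdevs become a raid0, the raid becomes an lvstore, a lvol is carved out and exported as cnode0, and snapshot/resize/clone/inflate are exercised while spdk_nvme_perf runs. The trace that follows performs exactly this sequence; condensed into a plain script (UUID capture via command substitution is illustrative, and the 20/30 sizes are read off the trace below, MiB per the test's defaults):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512                     # -> Malloc0
$RPC bdev_malloc_create 64 512                     # -> Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)     # prints the lvstore UUID
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)    # prints the lvol UUID
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Load generator, flags as traced below (cores 3 and 4, 4 KiB random writes):
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
# While IO is in flight, exercise the lvol features:
snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$RPC bdev_lvol_resize "$lvol" 30                   # grow 20 -> 30
clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
$RPC bdev_lvol_inflate "$clone"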
00:31:36.200 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:36.200 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:31:36.200 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:36.201 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:36.201 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:36.201 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.201 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:36.462 [2024-11-06 15:43:54.238855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.462 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:36.722 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:36.722 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:36.722 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:36.722 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:36.983 15:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:37.245 15:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b5f7f938-a799-4068-87a5-a929e1e94979 00:31:37.245 15:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b5f7f938-a799-4068-87a5-a929e1e94979 lvol 20 00:31:37.506 15:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2140507e-6c2a-44ee-8b8c-c76b56ab8f36 00:31:37.506 15:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:37.506 15:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2140507e-6c2a-44ee-8b8c-c76b56ab8f36 00:31:37.767 15:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:38.029 [2024-11-06 15:43:55.814768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:38.029 15:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:38.290 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4002929 00:31:38.290 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:38.290 15:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:39.232 15:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2140507e-6c2a-44ee-8b8c-c76b56ab8f36 MY_SNAPSHOT 00:31:39.494 15:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fc5cda7c-7a8f-47c7-8f0e-f7ba8ee0f733 00:31:39.494 15:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2140507e-6c2a-44ee-8b8c-c76b56ab8f36 30 00:31:39.754 15:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone fc5cda7c-7a8f-47c7-8f0e-f7ba8ee0f733 MY_CLONE 00:31:40.015 15:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=82fe29e6-0fe8-49b6-9df5-e8ffeb59745f 00:31:40.015 15:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 82fe29e6-0fe8-49b6-9df5-e8ffeb59745f 00:31:40.584 15:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4002929 00:31:48.715 Initializing NVMe Controllers 00:31:48.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:48.715 Controller IO queue size 128, less than required. 00:31:48.715 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:48.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:48.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:48.715 Initialization complete. Launching workers. 
00:31:48.715 ======================================================== 00:31:48.715 Latency(us) 00:31:48.715 Device Information : IOPS MiB/s Average min max 00:31:48.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15175.40 59.28 8436.10 1975.39 63279.05 00:31:48.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15237.20 59.52 8400.46 3906.85 68455.13 00:31:48.715 ======================================================== 00:31:48.715 Total : 30412.60 118.80 8418.24 1975.39 68455.13 00:31:48.715 00:31:48.715 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:48.715 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2140507e-6c2a-44ee-8b8c-c76b56ab8f36 00:31:48.975 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5f7f938-a799-4068-87a5-a929e1e94979 00:31:48.975 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:48.975 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:48.975 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:48.975 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:48.975 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:48.975 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:48.975 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:48.975 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:48.975 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:48.975 rmmod nvme_tcp 00:31:48.975 rmmod nvme_fabrics 00:31:48.975 rmmod nvme_keyring 00:31:49.234 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:49.234 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:49.234 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:49.234 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4002275 ']' 00:31:49.234 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4002275 00:31:49.234 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 4002275 ']' 00:31:49.234 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 4002275 00:31:49.234 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:31:49.234 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:49.234 15:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4002275 00:31:49.234 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:49.234 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:49.234 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4002275' 00:31:49.234 killing process with pid 4002275 00:31:49.234 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 4002275 00:31:49.234 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 4002275 00:31:49.235 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:49.235 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:49.235 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:49.235 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:49.235 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:49.235 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:49.235 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:49.235 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:49.235 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:49.235 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.235 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.235 15:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:51.776 00:31:51.776 real 0m23.939s 00:31:51.776 user 0m55.742s 00:31:51.776 sys 0m10.809s 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:51.776 ************************************ 00:31:51.776 END TEST nvmf_lvol 00:31:51.776 ************************************ 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:51.776 ************************************ 00:31:51.776 START TEST nvmf_lvs_grow 00:31:51.776 
************************************ 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:51.776 * Looking for test storage... 00:31:51.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:51.776 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:51.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.777 --rc genhtml_branch_coverage=1 00:31:51.777 --rc genhtml_function_coverage=1 00:31:51.777 --rc genhtml_legend=1 00:31:51.777 --rc geninfo_all_blocks=1 00:31:51.777 --rc geninfo_unexecuted_blocks=1 00:31:51.777 00:31:51.777 ' 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:51.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.777 --rc genhtml_branch_coverage=1 00:31:51.777 --rc genhtml_function_coverage=1 00:31:51.777 --rc genhtml_legend=1 00:31:51.777 --rc geninfo_all_blocks=1 00:31:51.777 --rc geninfo_unexecuted_blocks=1 00:31:51.777 00:31:51.777 ' 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:51.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.777 --rc genhtml_branch_coverage=1 00:31:51.777 --rc genhtml_function_coverage=1 00:31:51.777 --rc genhtml_legend=1 00:31:51.777 --rc geninfo_all_blocks=1 00:31:51.777 --rc geninfo_unexecuted_blocks=1 00:31:51.777 00:31:51.777 ' 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:51.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.777 --rc genhtml_branch_coverage=1 00:31:51.777 --rc genhtml_function_coverage=1 00:31:51.777 --rc genhtml_legend=1 00:31:51.777 --rc geninfo_all_blocks=1 00:31:51.777 --rc geninfo_unexecuted_blocks=1 00:31:51.777 00:31:51.777 ' 00:31:51.777 15:44:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:51.777 15:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.917 15:44:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:59.917 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:59.917 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:59.917 Found net devices under 0000:31:00.0: cvl_0_0 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.917 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:59.918 Found net devices under 0000:31:00.1: cvl_0_1 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.918 15:44:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.918 15:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:31:59.918 00:31:59.918 --- 10.0.0.2 ping statistics --- 00:31:59.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.918 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:59.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:31:59.918 00:31:59.918 --- 10.0.0.1 ping statistics --- 00:31:59.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.918 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4009198 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4009198 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 4009198 ']' 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:59.918 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:59.918 [2024-11-06 15:44:17.280115] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
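The two successful pings above close out the test-bed setup: one e810 port has been moved into a private network namespace to act as the target, while its sibling stays in the root namespace as the initiator. A condensed sketch of that topology, using the interface and namespace names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port

    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port, confirm reachability both ways, load the initiator driver.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp

This is also why the target app in this trace launches as "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1": every target-side command has to run inside that namespace.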
00:31:59.918 [2024-11-06 15:44:17.281261] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:31:59.918 [2024-11-06 15:44:17.281310] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.918 [2024-11-06 15:44:17.382944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.918 [2024-11-06 15:44:17.434605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.918 [2024-11-06 15:44:17.434660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.918 [2024-11-06 15:44:17.434669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.918 [2024-11-06 15:44:17.434676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.918 [2024-11-06 15:44:17.434683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.918 [2024-11-06 15:44:17.435487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.918 [2024-11-06 15:44:17.513344] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:59.918 [2024-11-06 15:44:17.513624] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:00.179 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:00.179 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:32:00.179 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.179 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:00.179 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:00.179 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.179 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:00.440 [2024-11-06 15:44:18.320392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:00.440 ************************************ 00:32:00.440 START TEST lvs_grow_clean 00:32:00.440 ************************************ 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:00.440 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:00.701 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:00.701 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:00.963 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b 00:32:00.963 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b 00:32:00.963 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:01.224 15:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:01.224 15:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:01.224 15:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b lvol 150 00:32:01.225 15:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7c7a7717-68fc-4ac0-9187-90f911d96086 00:32:01.225 15:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:01.225 15:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:01.535 [2024-11-06 15:44:19.372071] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:01.535 [2024-11-06 15:44:19.372243] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:01.535 true 00:32:01.535 15:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:01.535 15:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b 00:32:01.932 15:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:01.932 15:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:01.932 15:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7c7a7717-68fc-4ac0-9187-90f911d96086 00:32:02.201 15:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:02.201 [2024-11-06 15:44:20.108730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.201 15:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:02.461 15:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4009688 00:32:02.461 15:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:02.461 15:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:02.461 15:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4009688 /var/tmp/bdevperf.sock 00:32:02.461 15:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 4009688 ']' 00:32:02.461 15:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:02.461 15:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:02.461 15:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:02.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:02.461 15:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:02.461 15:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:02.461 [2024-11-06 15:44:20.347210] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:32:02.461 [2024-11-06 15:44:20.347281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009688 ] 00:32:02.461 [2024-11-06 15:44:20.441103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.722 [2024-11-06 15:44:20.493054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.292 15:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:03.293 15:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:32:03.293 15:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:03.554 Nvme0n1 00:32:03.554 15:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:03.815 [ 00:32:03.815 { 00:32:03.815 "name": "Nvme0n1", 00:32:03.815 "aliases": [ 00:32:03.815 "7c7a7717-68fc-4ac0-9187-90f911d96086" 00:32:03.815 ], 00:32:03.815 "product_name": "NVMe disk", 00:32:03.815 "block_size": 4096, 00:32:03.815 "num_blocks": 38912, 00:32:03.815 "uuid": "7c7a7717-68fc-4ac0-9187-90f911d96086", 00:32:03.815 "numa_id": 0, 00:32:03.815 "assigned_rate_limits": { 00:32:03.815 "rw_ios_per_sec": 0, 00:32:03.815 "rw_mbytes_per_sec": 0, 00:32:03.815 "r_mbytes_per_sec": 0, 00:32:03.815 "w_mbytes_per_sec": 0 00:32:03.815 }, 00:32:03.815 "claimed": false, 00:32:03.815 "zoned": false, 00:32:03.815 "supported_io_types": { 00:32:03.815 "read": true, 00:32:03.815 "write": true, 00:32:03.815 "unmap": true, 00:32:03.815 "flush": true, 00:32:03.815 "reset": true, 00:32:03.815 "nvme_admin": true, 00:32:03.815 "nvme_io": true, 00:32:03.815 "nvme_io_md": false, 00:32:03.815 "write_zeroes": true, 00:32:03.815 "zcopy": false, 00:32:03.815 "get_zone_info": false, 00:32:03.815 "zone_management": false, 00:32:03.815 "zone_append": false, 00:32:03.815 "compare": true, 00:32:03.815 "compare_and_write": true, 00:32:03.815 "abort": true, 00:32:03.815 "seek_hole": false, 00:32:03.815 "seek_data": false, 00:32:03.815 "copy": true, 
00:32:03.815 "nvme_iov_md": false 00:32:03.815 }, 00:32:03.815 "memory_domains": [ 00:32:03.815 { 00:32:03.815 "dma_device_id": "system", 00:32:03.815 "dma_device_type": 1 00:32:03.815 } 00:32:03.815 ], 00:32:03.815 "driver_specific": { 00:32:03.815 "nvme": [ 00:32:03.815 { 00:32:03.815 "trid": { 00:32:03.815 "trtype": "TCP", 00:32:03.815 "adrfam": "IPv4", 00:32:03.815 "traddr": "10.0.0.2", 00:32:03.815 "trsvcid": "4420", 00:32:03.815 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:03.815 }, 00:32:03.815 "ctrlr_data": { 00:32:03.815 "cntlid": 1, 00:32:03.815 "vendor_id": "0x8086", 00:32:03.815 "model_number": "SPDK bdev Controller", 00:32:03.815 "serial_number": "SPDK0", 00:32:03.815 "firmware_revision": "25.01", 00:32:03.815 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:03.815 "oacs": { 00:32:03.815 "security": 0, 00:32:03.815 "format": 0, 00:32:03.815 "firmware": 0, 00:32:03.815 "ns_manage": 0 00:32:03.815 }, 00:32:03.815 "multi_ctrlr": true, 00:32:03.815 "ana_reporting": false 00:32:03.815 }, 00:32:03.815 "vs": { 00:32:03.815 "nvme_version": "1.3" 00:32:03.815 }, 00:32:03.815 "ns_data": { 00:32:03.815 "id": 1, 00:32:03.815 "can_share": true 00:32:03.815 } 00:32:03.815 } 00:32:03.815 ], 00:32:03.815 "mp_policy": "active_passive" 00:32:03.815 } 00:32:03.815 } 00:32:03.815 ] 00:32:03.815 15:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4010020 00:32:03.815 15:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:03.815 15:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:03.815 Running I/O for 10 seconds... 
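Unlike the lvol test, lvs_grow drives I/O from bdevperf, so the workload can keep running while lvstore RPCs land on the target. Condensed from the trace above (binary paths shortened; flags as captured in this run, where -z makes bdevperf idle until it is configured over its RPC socket):

    # Start bdevperf on its own RPC socket, core mask 0x2, 4 KiB randwrite, qd 128.
    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # Attach the exported lvol; it shows up as bdev Nvme0n1 (the JSON dump above).
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Kick off the 10-second run; -S 1 is what produces the per-second rows below.
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Note in the table that follows that bdev_lvol_grow_lvstore is issued between the first and second one-second samples, i.e. the lvstore is grown while the job is in flight.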
00:32:05.198 Latency(us) 00:32:05.198 [2024-11-06T14:44:23.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.198 Nvme0n1 : 1.00 16266.00 63.54 0.00 0.00 0.00 0.00 0.00 00:32:05.198 [2024-11-06T14:44:23.181Z] =================================================================================================================== 00:32:05.198 [2024-11-06T14:44:23.181Z] Total : 16266.00 63.54 0.00 0.00 0.00 0.00 0.00 00:32:05.198 00:32:05.767 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b 00:32:06.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.027 Nvme0n1 : 2.00 16515.00 64.51 0.00 0.00 0.00 0.00 0.00 00:32:06.027 [2024-11-06T14:44:24.010Z] =================================================================================================================== 00:32:06.027 [2024-11-06T14:44:24.010Z] Total : 16515.00 64.51 0.00 0.00 0.00 0.00 0.00 00:32:06.027 00:32:06.027 true 00:32:06.027 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b 00:32:06.027 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:06.287 15:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:06.287 15:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:06.287 15:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4010020 00:32:06.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.857 Nvme0n1 : 3.00 16725.00 65.33 0.00 0.00 0.00 0.00 0.00 00:32:06.857 [2024-11-06T14:44:24.840Z] =================================================================================================================== 00:32:06.857 [2024-11-06T14:44:24.840Z] Total : 16725.00 65.33 0.00 0.00 0.00 0.00 0.00 00:32:06.857 00:32:07.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:07.796 Nvme0n1 : 4.00 17338.00 67.73 0.00 0.00 0.00 0.00 0.00 00:32:07.796 [2024-11-06T14:44:25.779Z] =================================================================================================================== 00:32:07.796 [2024-11-06T14:44:25.779Z] Total : 17338.00 67.73 0.00 0.00 0.00 0.00 0.00 00:32:07.796 00:32:09.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:09.178 Nvme0n1 : 5.00 18798.00 73.43 0.00 0.00 0.00 0.00 0.00 00:32:09.178 [2024-11-06T14:44:27.161Z] =================================================================================================================== 00:32:09.178 [2024-11-06T14:44:27.161Z] Total : 18798.00 73.43 0.00 0.00 0.00 0.00 0.00 00:32:09.178 00:32:10.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.118 Nvme0n1 : 6.00 19792.50 77.31 0.00 0.00 0.00 0.00 0.00 00:32:10.118 [2024-11-06T14:44:28.101Z] 
=================================================================================================================== 00:32:10.118 [2024-11-06T14:44:28.101Z] Total : 19792.50 77.31 0.00 0.00 0.00 0.00 0.00 00:32:10.118 00:32:11.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.058 Nvme0n1 : 7.00 20502.86 80.09 0.00 0.00 0.00 0.00 0.00 00:32:11.058 [2024-11-06T14:44:29.041Z] =================================================================================================================== 00:32:11.058 [2024-11-06T14:44:29.041Z] Total : 20502.86 80.09 0.00 0.00 0.00 0.00 0.00 00:32:11.058 00:32:11.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.998 Nvme0n1 : 8.00 21035.62 82.17 0.00 0.00 0.00 0.00 0.00 00:32:11.998 [2024-11-06T14:44:29.981Z] =================================================================================================================== 00:32:11.998 [2024-11-06T14:44:29.981Z] Total : 21035.62 82.17 0.00 0.00 0.00 0.00 0.00 00:32:11.998 00:32:12.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:12.938 Nvme0n1 : 9.00 21443.00 83.76 0.00 0.00 0.00 0.00 0.00 00:32:12.938 [2024-11-06T14:44:30.921Z] =================================================================================================================== 00:32:12.938 [2024-11-06T14:44:30.921Z] Total : 21443.00 83.76 0.00 0.00 0.00 0.00 0.00 00:32:12.938 00:32:13.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:13.879 Nvme0n1 : 10.00 21773.70 85.05 0.00 0.00 0.00 0.00 0.00 00:32:13.879 [2024-11-06T14:44:31.862Z] =================================================================================================================== 00:32:13.879 [2024-11-06T14:44:31.862Z] Total : 21773.70 85.05 0.00 0.00 0.00 0.00 0.00 00:32:13.879 00:32:13.879 00:32:13.879 Latency(us) 00:32:13.879 [2024-11-06T14:44:31.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:13.879 Nvme0n1 : 10.00 21778.51 85.07 0.00 0.00 5874.17 2921.81 29491.20 00:32:13.879 [2024-11-06T14:44:31.862Z] =================================================================================================================== 00:32:13.879 [2024-11-06T14:44:31.862Z] Total : 21778.51 85.07 0.00 0.00 5874.17 2921.81 29491.20 00:32:13.879 { 00:32:13.879 "results": [ 00:32:13.879 { 00:32:13.879 "job": "Nvme0n1", 00:32:13.879 "core_mask": "0x2", 00:32:13.879 "workload": "randwrite", 00:32:13.879 "status": "finished", 00:32:13.879 "queue_depth": 128, 00:32:13.879 "io_size": 4096, 00:32:13.879 "runtime": 10.003671, 00:32:13.879 "iops": 21778.505110773836, 00:32:13.879 "mibps": 85.0722855889603, 00:32:13.879 "io_failed": 0, 00:32:13.879 "io_timeout": 0, 00:32:13.879 "avg_latency_us": 5874.165976912308, 00:32:13.879 "min_latency_us": 2921.8133333333335, 00:32:13.879 "max_latency_us": 29491.2 00:32:13.879 } 00:32:13.879 ], 00:32:13.879 "core_count": 1 00:32:13.879 } 00:32:13.879 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4009688 00:32:13.879 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 4009688 ']' 00:32:13.879 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 4009688 00:32:13.879 
15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:32:13.879 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:13.879 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4009688 00:32:14.140 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:14.140 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:14.140 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4009688' 00:32:14.140 killing process with pid 4009688 00:32:14.140 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 4009688 00:32:14.140 Received shutdown signal, test time was about 10.000000 seconds 00:32:14.140 00:32:14.140 Latency(us) 00:32:14.140 [2024-11-06T14:44:32.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.140 [2024-11-06T14:44:32.123Z] =================================================================================================================== 00:32:14.140 [2024-11-06T14:44:32.123Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:14.140 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 4009688 00:32:14.140 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:14.400 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:14.400 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b 00:32:14.400 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:14.660 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:14.660 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:14.660 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:14.922 [2024-11-06 15:44:32.660146] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b 00:32:14.922 
15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b 00:32:14.922 request: 00:32:14.922 { 00:32:14.922 "uuid": "09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b", 00:32:14.922 "method": "bdev_lvol_get_lvstores", 00:32:14.922 "req_id": 1 00:32:14.922 } 00:32:14.922 Got JSON-RPC error response 00:32:14.922 response: 00:32:14.922 { 00:32:14.922 "code": -19, 00:32:14.922 "message": "No such device" 00:32:14.922 } 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:14.922 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:15.183 aio_bdev 00:32:15.183 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
7c7a7717-68fc-4ac0-9187-90f911d96086 00:32:15.183 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=7c7a7717-68fc-4ac0-9187-90f911d96086 00:32:15.183 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:15.183 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:32:15.183 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:15.183 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:15.183 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:15.444 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7c7a7717-68fc-4ac0-9187-90f911d96086 -t 2000 00:32:15.444 [ 00:32:15.444 { 00:32:15.444 "name": "7c7a7717-68fc-4ac0-9187-90f911d96086", 00:32:15.444 "aliases": [ 00:32:15.444 "lvs/lvol" 00:32:15.444 ], 00:32:15.444 "product_name": "Logical Volume", 00:32:15.444 "block_size": 4096, 00:32:15.444 "num_blocks": 38912, 00:32:15.444 "uuid": "7c7a7717-68fc-4ac0-9187-90f911d96086", 00:32:15.444 "assigned_rate_limits": { 00:32:15.444 "rw_ios_per_sec": 0, 00:32:15.444 "rw_mbytes_per_sec": 0, 00:32:15.444 "r_mbytes_per_sec": 0, 00:32:15.444 "w_mbytes_per_sec": 0 00:32:15.444 }, 00:32:15.444 "claimed": false, 00:32:15.444 "zoned": false, 00:32:15.444 "supported_io_types": { 00:32:15.444 "read": true, 00:32:15.444 "write": true, 00:32:15.444 "unmap": true, 00:32:15.444 "flush": false, 00:32:15.444 "reset": true, 00:32:15.444 "nvme_admin": false, 00:32:15.444 "nvme_io": false, 00:32:15.444 "nvme_io_md": false, 00:32:15.444 "write_zeroes": true, 00:32:15.444 "zcopy": false, 00:32:15.444 "get_zone_info": false, 00:32:15.444 "zone_management": false, 00:32:15.444 "zone_append": false, 00:32:15.444 "compare": false, 00:32:15.444 "compare_and_write": false, 00:32:15.444 "abort": false, 00:32:15.444 "seek_hole": true, 00:32:15.444 "seek_data": true, 00:32:15.444 "copy": false, 00:32:15.444 "nvme_iov_md": false 00:32:15.444 }, 00:32:15.444 "driver_specific": { 00:32:15.444 "lvol": { 00:32:15.444 "lvol_store_uuid": "09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b", 00:32:15.444 "base_bdev": "aio_bdev", 00:32:15.444 "thin_provision": false, 00:32:15.444 "num_allocated_clusters": 38, 00:32:15.444 "snapshot": false, 00:32:15.444 "clone": false, 00:32:15.444 "esnap_clone": false 00:32:15.444 } 00:32:15.444 } 00:32:15.444 } 00:32:15.444 ] 00:32:15.444 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:32:15.444 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b 00:32:15.444 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:15.706 15:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:15.706 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b 00:32:15.706 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:15.968 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:15.968 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7c7a7717-68fc-4ac0-9187-90f911d96086 00:32:15.968 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 09d4ee3f-6b8b-4a3f-a06c-b5e8c71a968b 00:32:16.228 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.489 00:32:16.489 real 0m15.934s 00:32:16.489 user 0m15.575s 00:32:16.489 sys 0m1.470s 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:16.489 ************************************ 00:32:16.489 END TEST lvs_grow_clean 00:32:16.489 ************************************ 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:16.489 ************************************ 00:32:16.489 START TEST lvs_grow_dirty 00:32:16.489 ************************************ 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.489 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:16.750 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:16.751 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:17.012 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:17.012 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:17.012 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:17.012 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:17.012 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:17.012 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 lvol 150 00:32:17.273 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=207cac45-0072-46eb-abf0-41542a9d1fe5 00:32:17.273 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:17.273 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:17.534 [2024-11-06 15:44:35.336078] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:17.534 [2024-11-06 15:44:35.336250] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:17.534 true 00:32:17.534 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:17.534 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:17.795 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:17.795 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:17.795 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 207cac45-0072-46eb-abf0-41542a9d1fe5 00:32:18.056 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:18.056 [2024-11-06 15:44:36.028567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.317 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:18.317 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4012754 00:32:18.317 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:18.317 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:18.317 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4012754 /var/tmp/bdevperf.sock 00:32:18.317 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 4012754 ']' 00:32:18.317 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:18.317 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:18.317 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:18.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
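The aio backing file was already truncated to 400M and rescanned above; once bdevperf is attached and I/O is running, the harness grows the lvstore under load and re-reads the cluster counts (traced further below). A self-contained sketch of that grow sequence, reusing the backing file path and lvstore UUID from this run:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    truncate -s 400M $SPDK/test/nvmf/target/aio_bdev   # enlarge backing file (200M -> 400M)
    $SPDK/scripts/rpc.py bdev_aio_rescan aio_bdev      # aio bdev picks up the new block count
    $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5
    # the test then expects total_data_clusters to read 99 (it was 49 at creation)
    $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 \
        | jq -r '.[0].total_data_clusters'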
00:32:18.317 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:18.317 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:18.317 [2024-11-06 15:44:36.267093] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:32:18.317 [2024-11-06 15:44:36.267149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012754 ] 00:32:18.577 [2024-11-06 15:44:36.352052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.577 [2024-11-06 15:44:36.383275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.147 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:19.147 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:19.147 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:19.718 Nvme0n1 00:32:19.718 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:19.718 [ 00:32:19.718 { 00:32:19.718 "name": "Nvme0n1", 00:32:19.718 "aliases": [ 00:32:19.718 "207cac45-0072-46eb-abf0-41542a9d1fe5" 00:32:19.718 ], 00:32:19.718 "product_name": "NVMe disk", 00:32:19.718 "block_size": 4096, 00:32:19.718 "num_blocks": 38912, 00:32:19.718 "uuid": "207cac45-0072-46eb-abf0-41542a9d1fe5", 00:32:19.718 "numa_id": 0, 00:32:19.718 "assigned_rate_limits": { 00:32:19.718 "rw_ios_per_sec": 0, 00:32:19.718 "rw_mbytes_per_sec": 0, 00:32:19.718 "r_mbytes_per_sec": 0, 00:32:19.718 "w_mbytes_per_sec": 0 00:32:19.718 }, 00:32:19.718 "claimed": false, 00:32:19.718 "zoned": false, 00:32:19.718 "supported_io_types": { 00:32:19.718 "read": true, 00:32:19.718 "write": true, 00:32:19.718 "unmap": true, 00:32:19.718 "flush": true, 00:32:19.718 "reset": true, 00:32:19.718 "nvme_admin": true, 00:32:19.718 "nvme_io": true, 00:32:19.718 "nvme_io_md": false, 00:32:19.718 "write_zeroes": true, 00:32:19.718 "zcopy": false, 00:32:19.718 "get_zone_info": false, 00:32:19.718 "zone_management": false, 00:32:19.718 "zone_append": false, 00:32:19.718 "compare": true, 00:32:19.718 "compare_and_write": true, 00:32:19.718 "abort": true, 00:32:19.718 "seek_hole": false, 00:32:19.718 "seek_data": false, 00:32:19.718 "copy": true, 00:32:19.718 "nvme_iov_md": false 00:32:19.718 }, 00:32:19.718 "memory_domains": [ 00:32:19.718 { 00:32:19.718 "dma_device_id": "system", 00:32:19.718 "dma_device_type": 1 00:32:19.718 } 00:32:19.718 ], 00:32:19.718 "driver_specific": { 00:32:19.718 "nvme": [ 00:32:19.718 { 00:32:19.718 "trid": { 00:32:19.718 "trtype": "TCP", 00:32:19.718 "adrfam": "IPv4", 00:32:19.718 "traddr": "10.0.0.2", 00:32:19.718 "trsvcid": "4420", 00:32:19.718 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:19.718 }, 00:32:19.718 "ctrlr_data": 
{ 00:32:19.718 "cntlid": 1, 00:32:19.718 "vendor_id": "0x8086", 00:32:19.718 "model_number": "SPDK bdev Controller", 00:32:19.718 "serial_number": "SPDK0", 00:32:19.718 "firmware_revision": "25.01", 00:32:19.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.718 "oacs": { 00:32:19.718 "security": 0, 00:32:19.718 "format": 0, 00:32:19.718 "firmware": 0, 00:32:19.718 "ns_manage": 0 00:32:19.718 }, 00:32:19.718 "multi_ctrlr": true, 00:32:19.718 "ana_reporting": false 00:32:19.718 }, 00:32:19.718 "vs": { 00:32:19.718 "nvme_version": "1.3" 00:32:19.718 }, 00:32:19.718 "ns_data": { 00:32:19.718 "id": 1, 00:32:19.718 "can_share": true 00:32:19.718 } 00:32:19.718 } 00:32:19.718 ], 00:32:19.718 "mp_policy": "active_passive" 00:32:19.718 } 00:32:19.718 } 00:32:19.718 ] 00:32:19.718 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:19.718 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4013096 00:32:19.718 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:19.718 Running I/O for 10 seconds... 00:32:21.101 Latency(us) 00:32:21.101 [2024-11-06T14:44:39.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.101 Nvme0n1 : 1.00 16571.00 64.73 0.00 0.00 0.00 0.00 0.00 00:32:21.101 [2024-11-06T14:44:39.084Z] =================================================================================================================== 00:32:21.101 [2024-11-06T14:44:39.084Z] Total : 16571.00 64.73 0.00 0.00 0.00 0.00 0.00 00:32:21.101 00:32:21.671 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:21.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.930 Nvme0n1 : 2.00 16813.50 65.68 0.00 0.00 0.00 0.00 0.00 00:32:21.930 [2024-11-06T14:44:39.913Z] =================================================================================================================== 00:32:21.930 [2024-11-06T14:44:39.913Z] Total : 16813.50 65.68 0.00 0.00 0.00 0.00 0.00 00:32:21.930 00:32:21.930 true 00:32:21.930 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:21.930 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:22.190 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:22.190 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:22.190 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4013096 00:32:22.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.760 Nvme0n1 : 
3.00 16835.67 65.76 0.00 0.00 0.00 0.00 0.00 00:32:22.760 [2024-11-06T14:44:40.743Z] =================================================================================================================== 00:32:22.760 [2024-11-06T14:44:40.743Z] Total : 16835.67 65.76 0.00 0.00 0.00 0.00 0.00 00:32:22.760 00:32:23.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.700 Nvme0n1 : 4.00 16910.75 66.06 0.00 0.00 0.00 0.00 0.00 00:32:23.700 [2024-11-06T14:44:41.683Z] =================================================================================================================== 00:32:23.700 [2024-11-06T14:44:41.683Z] Total : 16910.75 66.06 0.00 0.00 0.00 0.00 0.00 00:32:23.700 00:32:25.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.089 Nvme0n1 : 5.00 17327.00 67.68 0.00 0.00 0.00 0.00 0.00 00:32:25.089 [2024-11-06T14:44:43.072Z] =================================================================================================================== 00:32:25.089 [2024-11-06T14:44:43.072Z] Total : 17327.00 67.68 0.00 0.00 0.00 0.00 0.00 00:32:25.089 00:32:26.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.031 Nvme0n1 : 6.00 18457.83 72.10 0.00 0.00 0.00 0.00 0.00 00:32:26.031 [2024-11-06T14:44:44.014Z] =================================================================================================================== 00:32:26.031 [2024-11-06T14:44:44.014Z] Total : 18457.83 72.10 0.00 0.00 0.00 0.00 0.00 00:32:26.031 00:32:26.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.973 Nvme0n1 : 7.00 19270.14 75.27 0.00 0.00 0.00 0.00 0.00 00:32:26.973 [2024-11-06T14:44:44.956Z] =================================================================================================================== 00:32:26.973 [2024-11-06T14:44:44.956Z] Total : 19270.14 75.27 0.00 0.00 0.00 0.00 0.00 00:32:26.973 00:32:27.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.915 Nvme0n1 : 8.00 19883.38 77.67 0.00 0.00 0.00 0.00 0.00 00:32:27.915 [2024-11-06T14:44:45.898Z] =================================================================================================================== 00:32:27.915 [2024-11-06T14:44:45.898Z] Total : 19883.38 77.67 0.00 0.00 0.00 0.00 0.00 00:32:27.915 00:32:28.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:28.857 Nvme0n1 : 9.00 20358.56 79.53 0.00 0.00 0.00 0.00 0.00 00:32:28.857 [2024-11-06T14:44:46.840Z] =================================================================================================================== 00:32:28.857 [2024-11-06T14:44:46.840Z] Total : 20358.56 79.53 0.00 0.00 0.00 0.00 0.00 00:32:28.857 00:32:29.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.799 Nvme0n1 : 10.00 20735.50 81.00 0.00 0.00 0.00 0.00 0.00 00:32:29.799 [2024-11-06T14:44:47.782Z] =================================================================================================================== 00:32:29.799 [2024-11-06T14:44:47.782Z] Total : 20735.50 81.00 0.00 0.00 0.00 0.00 0.00 00:32:29.799 00:32:29.799 00:32:29.799 Latency(us) 00:32:29.799 [2024-11-06T14:44:47.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.799 Nvme0n1 : 10.01 20737.39 81.01 0.00 0.00 6168.68 3522.56 23920.64 00:32:29.799 
[2024-11-06T14:44:47.782Z] =================================================================================================================== 00:32:29.799 [2024-11-06T14:44:47.782Z] Total : 20737.39 81.01 0.00 0.00 6168.68 3522.56 23920.64 00:32:29.799 { 00:32:29.799 "results": [ 00:32:29.799 { 00:32:29.799 "job": "Nvme0n1", 00:32:29.799 "core_mask": "0x2", 00:32:29.799 "workload": "randwrite", 00:32:29.799 "status": "finished", 00:32:29.799 "queue_depth": 128, 00:32:29.799 "io_size": 4096, 00:32:29.799 "runtime": 10.005262, 00:32:29.799 "iops": 20737.387986441536, 00:32:29.799 "mibps": 81.00542182203725, 00:32:29.799 "io_failed": 0, 00:32:29.799 "io_timeout": 0, 00:32:29.799 "avg_latency_us": 6168.6808661914465, 00:32:29.799 "min_latency_us": 3522.56, 00:32:29.799 "max_latency_us": 23920.64 00:32:29.799 } 00:32:29.799 ], 00:32:29.799 "core_count": 1 00:32:29.799 } 00:32:29.799 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4012754 00:32:29.799 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 4012754 ']' 00:32:29.799 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 4012754 00:32:29.799 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:32:29.799 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:29.799 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4012754 00:32:30.060 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:30.060 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:30.060 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4012754' 00:32:30.060 killing process with pid 4012754 00:32:30.060 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 4012754 00:32:30.060 Received shutdown signal, test time was about 10.000000 seconds 00:32:30.060 00:32:30.060 Latency(us) 00:32:30.060 [2024-11-06T14:44:48.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.060 [2024-11-06T14:44:48.043Z] =================================================================================================================== 00:32:30.060 [2024-11-06T14:44:48.043Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:30.060 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 4012754 00:32:30.060 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:30.320 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
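With the 10-second run complete, bdevperf emits the JSON summary above and the harness tears the export down, as traced at the end of the block above. Schematically (a sketch only; $bdevperf_pid is the script's own variable for the pid reported earlier):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    kill $bdevperf_pid; wait $bdevperf_pid   # bdevperf prints its summary on shutdown
    $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0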
00:32:30.320 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:30.320 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4009198 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4009198 00:32:30.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4009198 Killed "${NVMF_APP[@]}" "$@" 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4015110 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4015110 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 4015110 ']' 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
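This is the step that makes the lvstore "dirty": the app that owns it (pid 4009198) is killed with SIGKILL, so the blobstore superblock is never marked clean, and a replacement target is started in interrupt mode. Schematically, with the flags and netns name exactly as traced above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    kill -9 4009198                     # SIGKILL: no clean blobstore shutdown
    ip netns exec cvl_0_0_ns_spdk \
        $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!                          # the script records this as nvmfpid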
00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:30.582 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:30.582 [2024-11-06 15:44:48.511445] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:30.582 [2024-11-06 15:44:48.512485] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:32:30.582 [2024-11-06 15:44:48.512533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.842 [2024-11-06 15:44:48.607411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.842 [2024-11-06 15:44:48.638735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.842 [2024-11-06 15:44:48.638771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.842 [2024-11-06 15:44:48.638777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.842 [2024-11-06 15:44:48.638782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.842 [2024-11-06 15:44:48.638786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.842 [2024-11-06 15:44:48.639245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.842 [2024-11-06 15:44:48.691576] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:30.842 [2024-11-06 15:44:48.691766] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
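Once the new target is up, re-creating the aio bdev over the same file forces blobstore recovery (the bs_recover and "Recover: blob" notices just below), after which the lvol and the grown cluster counts must still be visible. A sketch of that verification, with the lvol and lvstore UUIDs taken from this run:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
    $SPDK/scripts/rpc.py bdev_wait_for_examine
    $SPDK/scripts/rpc.py bdev_get_bdevs -b 207cac45-0072-46eb-abf0-41542a9d1fe5 -t 2000
    # after recovery the test expects free_clusters == 61 and total_data_clusters == 99
    $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 \
        | jq -r '.[0].free_clusters, .[0].total_data_clusters'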
00:32:31.413 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:31.413 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:31.413 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.413 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:31.413 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:31.413 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.413 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:31.674 [2024-11-06 15:44:49.505576] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:31.674 [2024-11-06 15:44:49.505842] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:31.674 [2024-11-06 15:44:49.505933] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:31.674 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:31.674 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 207cac45-0072-46eb-abf0-41542a9d1fe5 00:32:31.674 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=207cac45-0072-46eb-abf0-41542a9d1fe5 00:32:31.674 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:31.674 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:31.674 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:31.674 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:31.674 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:31.934 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 207cac45-0072-46eb-abf0-41542a9d1fe5 -t 2000 00:32:31.934 [ 00:32:31.934 { 00:32:31.934 "name": "207cac45-0072-46eb-abf0-41542a9d1fe5", 00:32:31.934 "aliases": [ 00:32:31.934 "lvs/lvol" 00:32:31.934 ], 00:32:31.934 "product_name": "Logical Volume", 00:32:31.934 "block_size": 4096, 00:32:31.934 "num_blocks": 38912, 00:32:31.934 "uuid": "207cac45-0072-46eb-abf0-41542a9d1fe5", 00:32:31.934 "assigned_rate_limits": { 00:32:31.934 "rw_ios_per_sec": 0, 00:32:31.934 "rw_mbytes_per_sec": 0, 00:32:31.934 
"r_mbytes_per_sec": 0, 00:32:31.934 "w_mbytes_per_sec": 0 00:32:31.934 }, 00:32:31.934 "claimed": false, 00:32:31.934 "zoned": false, 00:32:31.934 "supported_io_types": { 00:32:31.934 "read": true, 00:32:31.934 "write": true, 00:32:31.934 "unmap": true, 00:32:31.934 "flush": false, 00:32:31.934 "reset": true, 00:32:31.934 "nvme_admin": false, 00:32:31.934 "nvme_io": false, 00:32:31.934 "nvme_io_md": false, 00:32:31.934 "write_zeroes": true, 00:32:31.934 "zcopy": false, 00:32:31.934 "get_zone_info": false, 00:32:31.934 "zone_management": false, 00:32:31.934 "zone_append": false, 00:32:31.934 "compare": false, 00:32:31.934 "compare_and_write": false, 00:32:31.934 "abort": false, 00:32:31.934 "seek_hole": true, 00:32:31.934 "seek_data": true, 00:32:31.934 "copy": false, 00:32:31.934 "nvme_iov_md": false 00:32:31.934 }, 00:32:31.934 "driver_specific": { 00:32:31.934 "lvol": { 00:32:31.934 "lvol_store_uuid": "5eb3e895-6ffc-4387-bb57-b59b461a4ae5", 00:32:31.934 "base_bdev": "aio_bdev", 00:32:31.934 "thin_provision": false, 00:32:31.934 "num_allocated_clusters": 38, 00:32:31.934 "snapshot": false, 00:32:31.934 "clone": false, 00:32:31.934 "esnap_clone": false 00:32:31.934 } 00:32:31.934 } 00:32:31.934 } 00:32:31.934 ] 00:32:31.934 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:31.934 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:31.934 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:32.194 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:32.194 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:32.194 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:32.454 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:32.454 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:32.454 [2024-11-06 15:44:50.419727] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:32.714 15:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:32.714 request: 00:32:32.714 { 00:32:32.714 "uuid": "5eb3e895-6ffc-4387-bb57-b59b461a4ae5", 00:32:32.714 "method": "bdev_lvol_get_lvstores", 00:32:32.714 "req_id": 1 00:32:32.714 } 00:32:32.714 Got JSON-RPC error response 00:32:32.714 response: 00:32:32.714 { 00:32:32.714 "code": -19, 00:32:32.714 "message": "No such device" 00:32:32.714 } 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:32.714 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:32.974 aio_bdev 00:32:32.974 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 207cac45-0072-46eb-abf0-41542a9d1fe5 00:32:32.974 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=207cac45-0072-46eb-abf0-41542a9d1fe5 00:32:32.974 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:32.974 15:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:32.974 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:32.974 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:32.974 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:33.234 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 207cac45-0072-46eb-abf0-41542a9d1fe5 -t 2000 00:32:33.234 [ 00:32:33.234 { 00:32:33.234 "name": "207cac45-0072-46eb-abf0-41542a9d1fe5", 00:32:33.234 "aliases": [ 00:32:33.234 "lvs/lvol" 00:32:33.234 ], 00:32:33.234 "product_name": "Logical Volume", 00:32:33.234 "block_size": 4096, 00:32:33.234 "num_blocks": 38912, 00:32:33.234 "uuid": "207cac45-0072-46eb-abf0-41542a9d1fe5", 00:32:33.234 "assigned_rate_limits": { 00:32:33.234 "rw_ios_per_sec": 0, 00:32:33.234 "rw_mbytes_per_sec": 0, 00:32:33.234 "r_mbytes_per_sec": 0, 00:32:33.234 "w_mbytes_per_sec": 0 00:32:33.234 }, 00:32:33.234 "claimed": false, 00:32:33.234 "zoned": false, 00:32:33.234 "supported_io_types": { 00:32:33.234 "read": true, 00:32:33.234 "write": true, 00:32:33.234 "unmap": true, 00:32:33.234 "flush": false, 00:32:33.234 "reset": true, 00:32:33.234 "nvme_admin": false, 00:32:33.234 "nvme_io": false, 00:32:33.234 "nvme_io_md": false, 00:32:33.234 "write_zeroes": true, 00:32:33.234 "zcopy": false, 00:32:33.234 "get_zone_info": false, 00:32:33.234 "zone_management": false, 00:32:33.234 "zone_append": false, 00:32:33.234 "compare": false, 00:32:33.234 "compare_and_write": false, 00:32:33.234 "abort": false, 00:32:33.234 "seek_hole": true, 00:32:33.234 "seek_data": true, 00:32:33.234 "copy": false, 00:32:33.234 "nvme_iov_md": false 00:32:33.234 }, 00:32:33.234 "driver_specific": { 00:32:33.234 "lvol": { 00:32:33.234 "lvol_store_uuid": "5eb3e895-6ffc-4387-bb57-b59b461a4ae5", 00:32:33.234 "base_bdev": "aio_bdev", 00:32:33.234 "thin_provision": false, 00:32:33.234 "num_allocated_clusters": 38, 00:32:33.234 "snapshot": false, 00:32:33.234 "clone": false, 00:32:33.234 "esnap_clone": false 00:32:33.234 } 00:32:33.234 } 00:32:33.234 } 00:32:33.234 ] 00:32:33.235 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:33.235 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:33.235 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:33.495 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:33.495 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:33.495 15:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:33.755 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:33.755 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 207cac45-0072-46eb-abf0-41542a9d1fe5 00:32:33.755 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5eb3e895-6ffc-4387-bb57-b59b461a4ae5 00:32:34.015 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:34.276 00:32:34.276 real 0m17.624s 00:32:34.276 user 0m35.453s 00:32:34.276 sys 0m3.184s 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:34.276 ************************************ 00:32:34.276 END TEST lvs_grow_dirty 00:32:34.276 ************************************ 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:34.276 nvmf_trace.0 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
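[annotation] The lvs_grow_dirty tail above compresses the whole teardown into xtrace output. A minimal sketch of the same RPC sequence, using the rpc.py path, UUIDs, and file from this run (the harness's NOT_FOUND wrapper is replaced here by a plain failure check):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  LVS=5eb3e895-6ffc-4387-bb57-b59b461a4ae5
  LVOL=207cac45-0072-46eb-abf0-41542a9d1fe5
  AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

  # With the backing bdev gone, the lookup must fail (-19, "No such device").
  $RPC bdev_lvol_get_lvstores -u "$LVS" && { echo "lvstore should be gone" >&2; exit 1; }

  # Re-attach the same file as an AIO bdev; the dirty lvstore is re-examined from disk.
  $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
  $RPC bdev_wait_for_examine
  $RPC bdev_get_bdevs -b "$LVOL" -t 2000      # waitforbdev

  # Cluster counters must survive the reload before everything is torn down.
  $RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters'
  $RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'
  $RPC bdev_lvol_delete "$LVOL"
  $RPC bdev_lvol_delete_lvstore -u "$LVS"
  $RPC bdev_aio_delete aio_bdev
  rm -f "$AIO_FILE"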
00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:34.276 rmmod nvme_tcp 00:32:34.276 rmmod nvme_fabrics 00:32:34.276 rmmod nvme_keyring 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4015110 ']' 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4015110 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 4015110 ']' 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 4015110 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:34.276 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4015110 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4015110' 00:32:34.536 killing process with pid 4015110 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 4015110 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 4015110 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.536 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:37.079 00:32:37.079 real 0m45.140s 00:32:37.079 user 0m54.107s 00:32:37.079 sys 0m10.880s 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:37.079 ************************************ 00:32:37.079 END TEST nvmf_lvs_grow 00:32:37.079 ************************************ 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:37.079 ************************************ 00:32:37.079 START TEST nvmf_bdev_io_wait 00:32:37.079 ************************************ 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:37.079 * Looking for test storage... 
00:32:37.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:37.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.079 --rc genhtml_branch_coverage=1 00:32:37.079 --rc genhtml_function_coverage=1 00:32:37.079 --rc genhtml_legend=1 00:32:37.079 --rc geninfo_all_blocks=1 00:32:37.079 --rc geninfo_unexecuted_blocks=1 00:32:37.079 00:32:37.079 ' 00:32:37.079 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:37.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.079 --rc genhtml_branch_coverage=1 00:32:37.080 --rc genhtml_function_coverage=1 00:32:37.080 --rc genhtml_legend=1 00:32:37.080 --rc geninfo_all_blocks=1 00:32:37.080 --rc geninfo_unexecuted_blocks=1 00:32:37.080 00:32:37.080 ' 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:37.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.080 --rc genhtml_branch_coverage=1 00:32:37.080 --rc genhtml_function_coverage=1 00:32:37.080 --rc genhtml_legend=1 00:32:37.080 --rc geninfo_all_blocks=1 00:32:37.080 --rc geninfo_unexecuted_blocks=1 00:32:37.080 00:32:37.080 ' 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:37.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.080 --rc genhtml_branch_coverage=1 00:32:37.080 --rc genhtml_function_coverage=1 00:32:37.080 --rc genhtml_legend=1 00:32:37.080 --rc geninfo_all_blocks=1 00:32:37.080 --rc 
geninfo_unexecuted_blocks=1 00:32:37.080 00:32:37.080 ' 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:37.080 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.221 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:45.222 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:45.222 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:45.222 Found net devices under 0000:31:00.0: cvl_0_0 00:32:45.222 
15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:45.222 Found net devices under 0000:31:00.1: cvl_0_1 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:45.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:45.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:32:45.222 00:32:45.222 --- 10.0.0.2 ping statistics --- 00:32:45.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.222 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:45.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:32:45.222 00:32:45.222 --- 10.0.0.1 ping statistics --- 00:32:45.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.222 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.222 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4020231 00:32:45.223 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 4020231 00:32:45.223 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:45.223 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 4020231 ']' 00:32:45.223 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.223 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:45.223 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
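[annotation] Before nvmf_tgt above was launched, nvmf_tcp_init assembled a two-namespace fixture: the target side of the E810 pair (cvl_0_0) moves into its own network namespace, the initiator side (cvl_0_1) stays in the root namespace, one iptables rule admits NVMe/TCP on port 4420, and the two pings verify reachability in both directions. Condensed from the xtrace lines above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target ns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator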
00:32:45.223 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:45.223 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.223 [2024-11-06 15:45:02.529459] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:45.223 [2024-11-06 15:45:02.530621] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:32:45.223 [2024-11-06 15:45:02.530675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.223 [2024-11-06 15:45:02.635211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:45.223 [2024-11-06 15:45:02.690043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.223 [2024-11-06 15:45:02.690096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.223 [2024-11-06 15:45:02.690105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.223 [2024-11-06 15:45:02.690113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.223 [2024-11-06 15:45:02.690119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.223 [2024-11-06 15:45:02.692165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.223 [2024-11-06 15:45:02.692326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:45.223 [2024-11-06 15:45:02.692491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.223 [2024-11-06 15:45:02.692491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:45.223 [2024-11-06 15:45:02.692860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.483 [2024-11-06 15:45:03.453986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:45.483 [2024-11-06 15:45:03.454515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:45.483 [2024-11-06 15:45:03.454714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:45.483 [2024-11-06 15:45:03.454858] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
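[annotation] nvmf_tgt was started with --wait-for-rpc, so bdev options must land before the framework initializes; rpc_cmd in the xtrace is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Collected together with the subsystem wiring that follows on the next lines, the bring-up sequence is, in sketch form:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC bdev_set_options -p 5 -c 1      # bdev io pool/cache sizing; must precede framework init
  $RPC framework_start_init            # subsystems come up; poll groups switch to intr mode
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420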
00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.483 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.744 [2024-11-06 15:45:03.465366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.744 Malloc0 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.744 [2024-11-06 15:45:03.537478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4020352 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4020354 00:32:45.744 15:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.744 { 00:32:45.744 "params": { 00:32:45.744 "name": "Nvme$subsystem", 00:32:45.744 "trtype": "$TEST_TRANSPORT", 00:32:45.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.744 "adrfam": "ipv4", 00:32:45.744 "trsvcid": "$NVMF_PORT", 00:32:45.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.744 "hdgst": ${hdgst:-false}, 00:32:45.744 "ddgst": ${ddgst:-false} 00:32:45.744 }, 00:32:45.744 "method": "bdev_nvme_attach_controller" 00:32:45.744 } 00:32:45.744 EOF 00:32:45.744 )") 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4020356 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.744 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4020359 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.745 { 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme$subsystem", 00:32:45.745 "trtype": "$TEST_TRANSPORT", 00:32:45.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "$NVMF_PORT", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.745 "hdgst": ${hdgst:-false}, 00:32:45.745 "ddgst": ${ddgst:-false} 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 } 00:32:45.745 EOF 00:32:45.745 )") 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.745 { 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme$subsystem", 00:32:45.745 "trtype": "$TEST_TRANSPORT", 00:32:45.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "$NVMF_PORT", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.745 "hdgst": ${hdgst:-false}, 00:32:45.745 "ddgst": ${ddgst:-false} 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 } 00:32:45.745 EOF 00:32:45.745 )") 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.745 { 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme$subsystem", 00:32:45.745 "trtype": "$TEST_TRANSPORT", 00:32:45.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "$NVMF_PORT", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.745 "hdgst": ${hdgst:-false}, 00:32:45.745 "ddgst": ${ddgst:-false} 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 } 00:32:45.745 EOF 00:32:45.745 )") 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4020352 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme1", 00:32:45.745 "trtype": "tcp", 00:32:45.745 "traddr": "10.0.0.2", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "4420", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:45.745 "hdgst": false, 00:32:45.745 "ddgst": false 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 }' 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme1", 00:32:45.745 "trtype": "tcp", 00:32:45.745 "traddr": "10.0.0.2", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "4420", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:45.745 "hdgst": false, 00:32:45.745 "ddgst": false 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 }' 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme1", 00:32:45.745 "trtype": "tcp", 00:32:45.745 "traddr": "10.0.0.2", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "4420", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:45.745 "hdgst": false, 00:32:45.745 "ddgst": false 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 }' 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:45.745 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme1", 00:32:45.745 "trtype": "tcp", 00:32:45.745 "traddr": "10.0.0.2", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "4420", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:45.745 "hdgst": false, 00:32:45.745 "ddgst": false 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 }' 00:32:45.745 [2024-11-06 15:45:03.596341] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:32:45.745 [2024-11-06 15:45:03.596415] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:45.745 [2024-11-06 15:45:03.597819] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:32:45.745 [2024-11-06 15:45:03.597886] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:45.745 [2024-11-06 15:45:03.600609] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:32:45.745 [2024-11-06 15:45:03.600678] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:45.745 [2024-11-06 15:45:03.607030] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:32:45.745 [2024-11-06 15:45:03.607089] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:46.006 [2024-11-06 15:45:03.820094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.007 [2024-11-06 15:45:03.861664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:46.007 [2024-11-06 15:45:03.912551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.007 [2024-11-06 15:45:03.954320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:46.007 [2024-11-06 15:45:03.977710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.266 [2024-11-06 15:45:04.015631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:46.266 [2024-11-06 15:45:04.048963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.267 [2024-11-06 15:45:04.089321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:46.267 Running I/O for 1 seconds... 00:32:46.267 Running I/O for 1 seconds... 00:32:46.267 Running I/O for 1 seconds... 00:32:46.527 Running I/O for 1 seconds... 
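At this point all four bdevperf instances are up: the EAL notices show one single-core reactor each on cores 4-7 (masks 0x10/0x20/0x40/0x80), and the four "Running I/O for 1 seconds..." lines confirm the write, read, flush and unmap workloads are hitting cnode1 concurrently. The fan-out looks roughly like the sketch below; the flags are the traced ones, but the trace only records the FLUSH_PID and UNMAP_PID assignments, so WRITE_PID and READ_PID are assumed names for the first two jobs:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# One instance per workload: -m pins each to its own core, -i gives each
# its own shared-memory instance id, -s 256 caps its memory pool (MB).
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!    # assumed name; not visible in this trace
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!     # assumed name; not visible in this trace
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!

sync            # the bdev_io_wait.sh@35 step traced above
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The four per-workload latency tables that follow are each instance's one-second summary; the flush job posts far higher IOPS than the data-moving workloads, as expected for a command that carries no data payload.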
00:32:47.469 7738.00 IOPS, 30.23 MiB/s 00:32:47.469 Latency(us) 00:32:47.469 [2024-11-06T14:45:05.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.469 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:47.469 Nvme1n1 : 1.02 7733.62 30.21 0.00 0.00 16422.44 4942.51 25668.27 00:32:47.469 [2024-11-06T14:45:05.452Z] =================================================================================================================== 00:32:47.469 [2024-11-06T14:45:05.452Z] Total : 7733.62 30.21 0.00 0.00 16422.44 4942.51 25668.27 00:32:47.469 185696.00 IOPS, 725.38 MiB/s 00:32:47.469 Latency(us) 00:32:47.469 [2024-11-06T14:45:05.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.469 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:47.469 Nvme1n1 : 1.00 185322.79 723.92 0.00 0.00 686.91 314.03 1993.39 00:32:47.469 [2024-11-06T14:45:05.452Z] =================================================================================================================== 00:32:47.469 [2024-11-06T14:45:05.452Z] Total : 185322.79 723.92 0.00 0.00 686.91 314.03 1993.39 00:32:47.469 7224.00 IOPS, 28.22 MiB/s 00:32:47.469 Latency(us) 00:32:47.469 [2024-11-06T14:45:05.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.469 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:47.469 Nvme1n1 : 1.01 7307.50 28.54 0.00 0.00 17456.44 5024.43 26978.99 00:32:47.469 [2024-11-06T14:45:05.452Z] =================================================================================================================== 00:32:47.469 [2024-11-06T14:45:05.452Z] Total : 7307.50 28.54 0.00 0.00 17456.44 5024.43 26978.99 00:32:47.469 11844.00 IOPS, 46.27 MiB/s 00:32:47.469 Latency(us) 00:32:47.469 [2024-11-06T14:45:05.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.469 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:47.469 Nvme1n1 : 1.01 11916.38 46.55 0.00 0.00 10705.64 2184.53 16820.91 00:32:47.469 [2024-11-06T14:45:05.452Z] =================================================================================================================== 00:32:47.469 [2024-11-06T14:45:05.452Z] Total : 11916.38 46.55 0.00 0.00 10705.64 2184.53 16820.91 00:32:47.469 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4020354 00:32:47.469 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4020356 00:32:47.469 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4020359 00:32:47.469 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:47.469 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.469 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:47.469 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.469 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:47.469 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:47.469 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.730 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:47.730 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.730 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:47.730 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.730 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.730 rmmod nvme_tcp 00:32:47.730 rmmod nvme_fabrics 00:32:47.730 rmmod nvme_keyring 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4020231 ']' 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4020231 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 4020231 ']' 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 4020231 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4020231 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4020231' 00:32:47.731 killing process with pid 4020231 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 4020231 00:32:47.731 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 4020231 00:32:47.992 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:47.992 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:47.992 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:47.992 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:47.992 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:32:47.992 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:47.992 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:47.992 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.992 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:47.992 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.992 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.992 15:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.959 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:49.959 00:32:49.959 real 0m13.267s 00:32:49.959 user 0m15.892s 00:32:49.959 sys 0m7.829s 00:32:49.959 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:49.959 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:49.959 ************************************ 00:32:49.959 END TEST nvmf_bdev_io_wait 00:32:49.959 ************************************ 00:32:49.960 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:49.960 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:49.960 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:49.960 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:49.960 ************************************ 00:32:49.960 START TEST nvmf_queue_depth 00:32:49.960 ************************************ 00:32:49.960 15:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:50.257 * Looking for test storage... 
00:32:50.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:50.257 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:50.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.258 --rc genhtml_branch_coverage=1 00:32:50.258 --rc genhtml_function_coverage=1 00:32:50.258 --rc genhtml_legend=1 00:32:50.258 --rc geninfo_all_blocks=1 00:32:50.258 --rc geninfo_unexecuted_blocks=1 00:32:50.258 00:32:50.258 ' 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:50.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.258 --rc genhtml_branch_coverage=1 00:32:50.258 --rc genhtml_function_coverage=1 00:32:50.258 --rc genhtml_legend=1 00:32:50.258 --rc geninfo_all_blocks=1 00:32:50.258 --rc geninfo_unexecuted_blocks=1 00:32:50.258 00:32:50.258 ' 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:50.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.258 --rc genhtml_branch_coverage=1 00:32:50.258 --rc genhtml_function_coverage=1 00:32:50.258 --rc genhtml_legend=1 00:32:50.258 --rc geninfo_all_blocks=1 00:32:50.258 --rc geninfo_unexecuted_blocks=1 00:32:50.258 00:32:50.258 ' 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:50.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.258 --rc genhtml_branch_coverage=1 00:32:50.258 --rc genhtml_function_coverage=1 00:32:50.258 --rc genhtml_legend=1 00:32:50.258 --rc geninfo_all_blocks=1 00:32:50.258 --rc 
geninfo_unexecuted_blocks=1 00:32:50.258 00:32:50.258 ' 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:50.258 15:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
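With NET_TYPE=phy, nvmftestinit walks the PCI bus rather than creating virtual interfaces: the e810/x722/mlx arrays are keyed by known device IDs (0x1592/0x159b for E810, 0x37d2 for X722, the 0x10xx/0xa2xx range for Mellanox), and since this job selects e810, pci_devs collapses to the E810 functions found next. A minimal sketch of the same discovery under stated assumptions — the real helper caches the whole bus up front and handles the other NIC families, and 8086:159b is just the one device ID seen in this trace:

# Collect E810 functions and resolve each to its kernel net device via sysfs.
net_devs=()
for pci in $(lspci -Dmmn -d 8086:159b | awk '{print $1}'); do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] && net_devs+=("${path##*/}")
    done
done
printf 'Found net device: %s\n' "${net_devs[@]}"

Here that yields cvl_0_0 and cvl_0_1, the two ports of the adapter at 0000:31:00, as the "Found net devices" lines below show.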
00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:58.401 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:58.402 15:45:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:58.402 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:58.402 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:32:58.402 Found net devices under 0000:31:00.0: cvl_0_0 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:58.402 Found net devices under 0000:31:00.1: cvl_0_1 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:58.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:58.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:32:58.402 00:32:58.402 --- 10.0.0.2 ping statistics --- 00:32:58.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.402 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:58.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:58.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:32:58.402 00:32:58.402 --- 10.0.0.1 ping statistics --- 00:32:58.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.402 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:58.402 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4025537 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4025537 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 4025537 ']' 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:58.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
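Before nvmf_tgt was launched just above, nvmf_tcp_init split the two ports across network namespaces: cvl_0_0 becomes the target interface inside cvl_0_0_ns_spdk at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, and the two pings prove the path in both directions before anything NVMe-related starts. Condensed from the trace (interface names and addresses are this run's):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment is what lets teardown strip
# this rule back out via iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

From here on every target invocation is prefixed with ip netns exec cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD, which is why nvmf_tgt listens from inside the namespace while bdevperf connects from outside it.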
00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:58.403 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.403 [2024-11-06 15:45:15.853571] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:58.403 [2024-11-06 15:45:15.854723] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:32:58.403 [2024-11-06 15:45:15.854782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.403 [2024-11-06 15:45:15.960225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.403 [2024-11-06 15:45:16.010398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:58.403 [2024-11-06 15:45:16.010445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:58.403 [2024-11-06 15:45:16.010454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:58.403 [2024-11-06 15:45:16.010461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:58.403 [2024-11-06 15:45:16.010467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:58.403 [2024-11-06 15:45:16.011270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.403 [2024-11-06 15:45:16.088434] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:58.403 [2024-11-06 15:45:16.088714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
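Because the suite runs with --interrupt-mode (appended to NVMF_APP when nvmf/common.sh parsed the flag earlier), the single reactor on core 1 and the spdk_threads it hosts sleep on file descriptors between events instead of busy-polling, which the thread.c notices for app_thread and nvmf_tgt_poll_group_000 confirm. The launch line from this run, pulled apart:

# -i 0              shm instance id; matches the suggested 'spdk_trace -s nvmf -i 0'
# -e 0xFFFF         enable all tracepoint groups (the app_setup_trace notice)
# --interrupt-mode  reactors block on fds rather than polling
# -m 0x2            one reactor, pinned to core 1
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

waitforlisten then blocks until the target's RPC socket /var/tmp/spdk.sock accepts connections, which is the "Waiting for process to start up..." message above.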
00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.977 [2024-11-06 15:45:16.712140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.977 Malloc0 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.977 [2024-11-06 15:45:16.792179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4025621 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:58.977 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4025621 /var/tmp/bdevperf.sock 00:32:58.978 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 4025621 ']' 00:32:58.978 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:58.978 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:58.978 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:58.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:58.978 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:58.978 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.978 [2024-11-06 15:45:16.848717] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
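With the target listening, queue_depth.sh assembles the whole data path over RPC and brings up its load generator the same way: bdevperf started with -z idles until its private RPC socket receives a controller to attach. Replayed with scripts/rpc.py — rpc_cmd in the trace is a wrapper around it, and every argument below is verbatim from this run:

RPC=scripts/rpc.py

# Target side, default socket /var/tmp/spdk.sock; transport options as traced.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB backing bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: -z parks bdevperf until an RPC supplies a bdev, -r points it
# at its own socket, and -q 1024 is the queue depth under test.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

perform_tests drives the 10-second verify run whose per-second IOPS samples, final latency table, and machine-readable JSON summary appear below.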
00:32:58.978 [2024-11-06 15:45:16.848785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4025621 ] 00:32:58.978 [2024-11-06 15:45:16.940300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.239 [2024-11-06 15:45:16.993289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.811 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:59.811 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:59.811 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:59.811 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.811 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:00.071 NVMe0n1 00:33:00.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:00.071 Running I/O for 10 seconds... 00:33:01.955 8327.00 IOPS, 32.53 MiB/s [2024-11-06T14:45:21.326Z] 8709.50 IOPS, 34.02 MiB/s [2024-11-06T14:45:22.266Z] 9219.67 IOPS, 36.01 MiB/s [2024-11-06T14:45:23.208Z] 10243.50 IOPS, 40.01 MiB/s [2024-11-06T14:45:24.150Z] 10867.60 IOPS, 42.45 MiB/s [2024-11-06T14:45:25.091Z] 11367.50 IOPS, 44.40 MiB/s [2024-11-06T14:45:26.032Z] 11697.43 IOPS, 45.69 MiB/s [2024-11-06T14:45:26.973Z] 11920.00 IOPS, 46.56 MiB/s [2024-11-06T14:45:28.358Z] 12112.33 IOPS, 47.31 MiB/s [2024-11-06T14:45:28.358Z] 12277.00 IOPS, 47.96 MiB/s 00:33:10.375 Latency(us) 00:33:10.375 [2024-11-06T14:45:28.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.375 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:10.375 Verification LBA range: start 0x0 length 0x4000 00:33:10.375 NVMe0n1 : 10.06 12304.02 48.06 0.00 0.00 82932.88 24903.68 74274.13 00:33:10.375 [2024-11-06T14:45:28.358Z] =================================================================================================================== 00:33:10.375 [2024-11-06T14:45:28.358Z] Total : 12304.02 48.06 0.00 0.00 82932.88 24903.68 74274.13 00:33:10.375 { 00:33:10.375 "results": [ 00:33:10.375 { 00:33:10.375 "job": "NVMe0n1", 00:33:10.375 "core_mask": "0x1", 00:33:10.375 "workload": "verify", 00:33:10.375 "status": "finished", 00:33:10.375 "verify_range": { 00:33:10.375 "start": 0, 00:33:10.375 "length": 16384 00:33:10.375 }, 00:33:10.375 "queue_depth": 1024, 00:33:10.375 "io_size": 4096, 00:33:10.375 "runtime": 10.061262, 00:33:10.375 "iops": 12304.023093723234, 00:33:10.375 "mibps": 48.06259020985638, 00:33:10.375 "io_failed": 0, 00:33:10.375 "io_timeout": 0, 00:33:10.375 "avg_latency_us": 82932.87585418786, 00:33:10.375 "min_latency_us": 24903.68, 00:33:10.375 "max_latency_us": 74274.13333333333 00:33:10.375 } 00:33:10.375 ], 
00:33:10.375 "core_count": 1 00:33:10.375 } 00:33:10.375 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4025621 00:33:10.375 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 4025621 ']' 00:33:10.375 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 4025621 00:33:10.375 15:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:10.375 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4025621 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4025621' 00:33:10.376 killing process with pid 4025621 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 4025621 00:33:10.376 Received shutdown signal, test time was about 10.000000 seconds 00:33:10.376 00:33:10.376 Latency(us) 00:33:10.376 [2024-11-06T14:45:28.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.376 [2024-11-06T14:45:28.359Z] =================================================================================================================== 00:33:10.376 [2024-11-06T14:45:28.359Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 4025621 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:10.376 rmmod nvme_tcp 00:33:10.376 rmmod nvme_fabrics 00:33:10.376 rmmod nvme_keyring 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:10.376 15:45:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4025537 ']' 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4025537 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 4025537 ']' 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 4025537 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4025537 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4025537' 00:33:10.376 killing process with pid 4025537 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 4025537 00:33:10.376 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 4025537 00:33:10.637 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:10.637 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:10.637 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:10.637 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:10.637 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:10.637 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:10.637 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:10.637 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:10.637 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:10.637 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.637 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.637 15:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.551 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:12.551 00:33:12.551 real 0m22.576s 00:33:12.551 user 0m24.714s 00:33:12.551 sys 0m7.494s 00:33:12.551 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:33:12.551 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.551 ************************************ 00:33:12.551 END TEST nvmf_queue_depth 00:33:12.551 ************************************ 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:12.814 ************************************ 00:33:12.814 START TEST nvmf_target_multipath 00:33:12.814 ************************************ 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:12.814 * Looking for test storage... 00:33:12.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:12.814 15:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:12.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.814 --rc genhtml_branch_coverage=1 00:33:12.814 --rc genhtml_function_coverage=1 00:33:12.814 --rc genhtml_legend=1 00:33:12.814 --rc geninfo_all_blocks=1 00:33:12.814 --rc geninfo_unexecuted_blocks=1 00:33:12.814 00:33:12.814 ' 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:12.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.814 --rc genhtml_branch_coverage=1 00:33:12.814 --rc genhtml_function_coverage=1 00:33:12.814 --rc genhtml_legend=1 00:33:12.814 --rc geninfo_all_blocks=1 00:33:12.814 --rc geninfo_unexecuted_blocks=1 00:33:12.814 00:33:12.814 ' 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:12.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.814 --rc genhtml_branch_coverage=1 00:33:12.814 --rc genhtml_function_coverage=1 00:33:12.814 --rc genhtml_legend=1 00:33:12.814 --rc geninfo_all_blocks=1 00:33:12.814 --rc 
geninfo_unexecuted_blocks=1 00:33:12.814 00:33:12.814 ' 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:12.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.814 --rc genhtml_branch_coverage=1 00:33:12.814 --rc genhtml_function_coverage=1 00:33:12.814 --rc genhtml_legend=1 00:33:12.814 --rc geninfo_all_blocks=1 00:33:12.814 --rc geninfo_unexecuted_blocks=1 00:33:12.814 00:33:12.814 ' 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:12.814 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:13.077 15:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:13.077 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:21.223 15:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:21.223 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:21.223 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:21.223 15:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:21.223 Found net devices under 0000:31:00.0: cvl_0_0 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:21.223 Found net devices under 0000:31:00.1: cvl_0_1 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:21.223 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:21.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:21.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:33:21.224 00:33:21.224 --- 10.0.0.2 ping statistics --- 00:33:21.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.224 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:21.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:21.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:33:21.224 00:33:21.224 --- 10.0.0.1 ping statistics --- 00:33:21.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.224 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:21.224 only one NIC for nvmf test 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:21.224 rmmod nvme_tcp 00:33:21.224 rmmod nvme_fabrics 00:33:21.224 rmmod nvme_keyring 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:21.224 15:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.224 15:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:22.612 15:45:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.612 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.874 00:33:22.874 real 0m10.025s 00:33:22.874 user 0m2.193s 00:33:22.874 sys 0m5.786s 00:33:22.874 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:22.874 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:22.874 ************************************ 00:33:22.874 END TEST nvmf_target_multipath 00:33:22.874 ************************************ 00:33:22.874 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:22.874 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:22.874 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:22.874 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:22.874 ************************************ 00:33:22.874 START TEST nvmf_zcopy 00:33:22.874 ************************************ 00:33:22.874 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:22.874 * Looking for test storage... 
00:33:22.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.874 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:22.874 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:33:22.874 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:23.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.137 --rc genhtml_branch_coverage=1 00:33:23.137 --rc genhtml_function_coverage=1 00:33:23.137 --rc genhtml_legend=1 00:33:23.137 --rc geninfo_all_blocks=1 00:33:23.137 --rc geninfo_unexecuted_blocks=1 00:33:23.137 00:33:23.137 ' 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:23.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.137 --rc genhtml_branch_coverage=1 00:33:23.137 --rc genhtml_function_coverage=1 00:33:23.137 --rc genhtml_legend=1 00:33:23.137 --rc geninfo_all_blocks=1 00:33:23.137 --rc geninfo_unexecuted_blocks=1 00:33:23.137 00:33:23.137 ' 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:23.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.137 --rc genhtml_branch_coverage=1 00:33:23.137 --rc genhtml_function_coverage=1 00:33:23.137 --rc genhtml_legend=1 00:33:23.137 --rc geninfo_all_blocks=1 00:33:23.137 --rc geninfo_unexecuted_blocks=1 00:33:23.137 00:33:23.137 ' 00:33:23.137 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:23.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.137 --rc genhtml_branch_coverage=1 00:33:23.137 --rc genhtml_function_coverage=1 00:33:23.137 --rc genhtml_legend=1 00:33:23.137 --rc geninfo_all_blocks=1 00:33:23.137 --rc geninfo_unexecuted_blocks=1 00:33:23.137 00:33:23.137 ' 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.138 15:45:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:23.138 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.284 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:31.285 15:45:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:31.285 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:31.285 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:31.285 Found net devices under 0000:31:00.0: cvl_0_0 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:31.285 Found net devices under 0000:31:00.1: cvl_0_1 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:31.285 15:45:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.285 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:31.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:31.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:33:31.286 00:33:31.286 --- 10.0.0.2 ping statistics --- 00:33:31.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.286 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:31.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:33:31.286 00:33:31.286 --- 10.0.0.1 ping statistics --- 00:33:31.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.286 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=4036209 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 4036209 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 4036209 ']' 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:31.286 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.286 [2024-11-06 15:45:48.538794] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:31.286 [2024-11-06 15:45:48.539999] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:33:31.286 [2024-11-06 15:45:48.540054] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.286 [2024-11-06 15:45:48.638178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.286 [2024-11-06 15:45:48.688011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.286 [2024-11-06 15:45:48.688059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.286 [2024-11-06 15:45:48.688068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.286 [2024-11-06 15:45:48.688075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.286 [2024-11-06 15:45:48.688082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.286 [2024-11-06 15:45:48.688852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.286 [2024-11-06 15:45:48.766503] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:31.286 [2024-11-06 15:45:48.766805] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
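The nvmf_tcp_init and nvmfappstart steps traced above reduce to a handful of iproute2 commands plus one process launch. A condensed sketch using this run's interface names and paths, assuming it runs as root from the spdk checkout; the rpc.py readiness poll stands in for waitforlisten and is an assumption about its effect, not the helper's literal code:

# One port of the back-to-back E810 pair moves into a namespace, giving the
# target and the initiator separate network stacks on a single host:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
# Start the target inside the namespace in interrupt mode, then wait until its
# RPC socket answers (an assumed stand-in for what waitforlisten does):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
until ./scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do sleep 0.5; done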
00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.548 [2024-11-06 15:45:49.405689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.548 [2024-11-06 15:45:49.433991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:31.548 15:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.548 malloc0 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:31.548 { 00:33:31.548 "params": { 00:33:31.548 "name": "Nvme$subsystem", 00:33:31.548 "trtype": "$TEST_TRANSPORT", 00:33:31.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:31.548 "adrfam": "ipv4", 00:33:31.548 "trsvcid": "$NVMF_PORT", 00:33:31.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:31.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:31.548 "hdgst": ${hdgst:-false}, 00:33:31.548 "ddgst": ${ddgst:-false} 00:33:31.548 }, 00:33:31.548 "method": "bdev_nvme_attach_controller" 00:33:31.548 } 00:33:31.548 EOF 00:33:31.548 )") 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:31.548 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:31.548 "params": { 00:33:31.548 "name": "Nvme1", 00:33:31.548 "trtype": "tcp", 00:33:31.548 "traddr": "10.0.0.2", 00:33:31.548 "adrfam": "ipv4", 00:33:31.548 "trsvcid": "4420", 00:33:31.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:31.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:31.548 "hdgst": false, 00:33:31.548 "ddgst": false 00:33:31.548 }, 00:33:31.548 "method": "bdev_nvme_attach_controller" 00:33:31.548 }' 00:33:31.815 [2024-11-06 15:45:49.546213] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
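The rpc_cmd calls traced above perform the whole target-side configuration; rpc_cmd forwards its arguments to scripts/rpc.py on /var/tmp/spdk.sock. A sketch of the same sequence as standalone invocations, with arguments copied from the trace (-o simply mirrors the NVMF_TRANSPORT_OPTS value set earlier in this log):

# TCP transport with zero-copy operations enabled; -c 0 sets the in-capsule
# data size to zero so reads and writes go through the zcopy path:
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem allowing any host (-a), serial SPDK00000000000001, max 10 namespaces:
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MiB malloc bdev with a 4096-byte block size, exported as namespace 1:
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1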
00:33:31.815 [2024-11-06 15:45:49.546279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036308 ]
00:33:31.815 [2024-11-06 15:45:49.640446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:31.815 [2024-11-06 15:45:49.693879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:32.388 Running I/O for 10 seconds...
00:33:34.275 6311.00 IOPS, 49.30 MiB/s
[2024-11-06T14:45:53.200Z] 6350.00 IOPS, 49.61 MiB/s
[2024-11-06T14:45:54.143Z] 6362.33 IOPS, 49.71 MiB/s
[2024-11-06T14:45:55.085Z] 6368.75 IOPS, 49.76 MiB/s
[2024-11-06T14:45:56.470Z] 6527.60 IOPS, 51.00 MiB/s
[2024-11-06T14:45:57.412Z] 7016.17 IOPS, 54.81 MiB/s
[2024-11-06T14:45:58.354Z] 7375.00 IOPS, 57.62 MiB/s
[2024-11-06T14:45:59.295Z] 7640.50 IOPS, 59.69 MiB/s
[2024-11-06T14:46:00.235Z] 7848.11 IOPS, 61.31 MiB/s
[2024-11-06T14:46:00.235Z] 8013.70 IOPS, 62.61 MiB/s
00:33:42.252 Latency(us)
00:33:42.252 [2024-11-06T14:46:00.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:42.252 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:42.252 Verification LBA range: start 0x0 length 0x1000
00:33:42.252 Nvme1n1 : 10.05 7984.94 62.38 0.00 0.00 15927.48 2553.17 44782.93
00:33:42.252 [2024-11-06T14:46:00.235Z] ===================================================================================================================
00:33:42.252 [2024-11-06T14:46:00.235Z] Total : 7984.94 62.38 0.00 0.00 15927.48 2553.17 44782.93
00:33:42.513 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4038314
00:33:42.513 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:33:42.513 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:42.513 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:33:42.513 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:33:42.513 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:33:42.513 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:33:42.513 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:42.513 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:42.513 {
00:33:42.513 "params": {
00:33:42.513 "name": "Nvme$subsystem",
00:33:42.513 "trtype": "$TEST_TRANSPORT",
00:33:42.513 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:42.513 "adrfam": "ipv4",
00:33:42.513 "trsvcid": "$NVMF_PORT",
00:33:42.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:42.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:42.513 "hdgst": ${hdgst:-false},
00:33:42.513 "ddgst": ${ddgst:-false}
00:33:42.513 },
00:33:42.513 "method": "bdev_nvme_attach_controller"
00:33:42.513 }
00:33:42.513 EOF
00:33:42.513 )")
00:33:42.514 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:42.514
[2024-11-06 15:46:00.249256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.249284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:42.514 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:42.514 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:42.514 "params": { 00:33:42.514 "name": "Nvme1", 00:33:42.514 "trtype": "tcp", 00:33:42.514 "traddr": "10.0.0.2", 00:33:42.514 "adrfam": "ipv4", 00:33:42.514 "trsvcid": "4420", 00:33:42.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:42.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:42.514 "hdgst": false, 00:33:42.514 "ddgst": false 00:33:42.514 }, 00:33:42.514 "method": "bdev_nvme_attach_controller" 00:33:42.514 }' 00:33:42.514 [2024-11-06 15:46:00.261227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.261237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.273225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.273232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.285224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.285232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.291140] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
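gen_nvmf_target_json, traced here and at the first bdevperf run, expands that heredoc once per subsystem and wraps the fragments into a complete bdev-subsystem config, which bdevperf reads through an anonymous descriptor (--json /dev/fd/62 and /dev/fd/63 in this log). A sketch of the equivalent expanded invocation with this run's values; the outer "subsystems" wrapper is inferred from the jq '.' assembly step, so treat the exact shape as an assumption:

# Same randrw job as above, with the generated config spelled out inline:
./build/examples/bdevperf -t 5 -q 128 -w randrw -M 50 -o 8192 --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }]
  }]
}
EOF
)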
00:33:42.514 [2024-11-06 15:46:00.291187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038314 ] 00:33:42.514 [2024-11-06 15:46:00.297224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.297232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.309225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.309233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.321225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.321232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.333224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.333232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.345225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.345232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.357225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.357233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.369225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.369233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.373180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.514 [2024-11-06 15:46:00.381225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.381234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.393227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.393235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.403242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.514 [2024-11-06 15:46:00.405225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.405235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.417232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.417247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.429230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.429241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.441226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:42.514 [2024-11-06 15:46:00.441237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.453228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.453238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.465225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.465233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.477233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.477250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.514 [2024-11-06 15:46:00.489227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.514 [2024-11-06 15:46:00.489236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.501227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.501238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.513227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.513238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.566246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.566260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.577227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.577239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 Running I/O for 5 seconds... 
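Each repeated error pair that follows ("Requested NSID 1 already in use", then "Unable to add namespace") is one iteration of zcopy.sh re-adding the namespace that already exists while the 5-second randrw job runs: every attempt forces a subsystem pause/resume cycle under live zero-copy I/O, which is the condition being exercised. A sketch of such a hammer loop; the loop condition is assumed rather than copied from the script, and $perfpid is the bdevperf pid traced above:

# Re-add the existing namespace for as long as bdevperf is alive; each call is
# rejected with "NSID 1 already in use" but still pauses and resumes the
# subsystem while zcopy I/O is in flight (assumed loop shape):
while kill -0 "$perfpid" 2> /dev/null; do
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done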
00:33:42.775 [2024-11-06 15:46:00.592176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.592192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.605176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.605193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.618510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.618526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.632929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.632945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.646396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.646411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.660545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.660560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.673902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.673917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.688663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.688678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.701994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.702013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.716169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.716184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.729283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.729297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.741956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.741971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.775 [2024-11-06 15:46:00.756264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.775 [2024-11-06 15:46:00.756280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.769168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 [2024-11-06 15:46:00.769184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.781941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 
[2024-11-06 15:46:00.781956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.796120] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 [2024-11-06 15:46:00.796135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.809337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 [2024-11-06 15:46:00.809353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.822049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 [2024-11-06 15:46:00.822064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.836575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 [2024-11-06 15:46:00.836589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.849820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 [2024-11-06 15:46:00.849834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.864565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 [2024-11-06 15:46:00.864580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.877775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 [2024-11-06 15:46:00.877790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.892241] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 [2024-11-06 15:46:00.892256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.905018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 [2024-11-06 15:46:00.905034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.918439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 [2024-11-06 15:46:00.918454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.036 [2024-11-06 15:46:00.932626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.036 [2024-11-06 15:46:00.932641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.037 [2024-11-06 15:46:00.945921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.037 [2024-11-06 15:46:00.945936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.037 [2024-11-06 15:46:00.960230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.037 [2024-11-06 15:46:00.960249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.037 [2024-11-06 15:46:00.973278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.037 [2024-11-06 15:46:00.973293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.037 [2024-11-06 15:46:00.986273] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.037 [2024-11-06 15:46:00.986288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.037 [2024-11-06 15:46:01.000726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.037 [2024-11-06 15:46:01.000741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.037 [2024-11-06 15:46:01.013810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.037 [2024-11-06 15:46:01.013824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.298 [2024-11-06 15:46:01.028566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.298 [2024-11-06 15:46:01.028581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.298 [2024-11-06 15:46:01.041673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.298 [2024-11-06 15:46:01.041688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.298 [2024-11-06 15:46:01.056419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.298 [2024-11-06 15:46:01.056434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.298 [2024-11-06 15:46:01.069568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.298 [2024-11-06 15:46:01.069582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.298 [2024-11-06 15:46:01.084608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.298 [2024-11-06 15:46:01.084622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.298 [2024-11-06 15:46:01.097763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.298 [2024-11-06 15:46:01.097777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.298 [2024-11-06 15:46:01.112401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.298 [2024-11-06 15:46:01.112416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.298 [2024-11-06 15:46:01.125487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.298 [2024-11-06 15:46:01.125501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.298 [2024-11-06 15:46:01.139923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.298 [2024-11-06 15:46:01.139938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.298 [2024-11-06 15:46:01.153047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.298 [2024-11-06 15:46:01.153061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.298 [2024-11-06 15:46:01.165771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.298 [2024-11-06 15:46:01.165785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.298 [2024-11-06 15:46:01.180440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.298 [2024-11-06 15:46:01.180455] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:43.298 [2024-11-06 15:46:01.193609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:43.298 [2024-11-06 15:46:01.193623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the two *ERROR* lines above repeat roughly every 13-15 ms, with only the timestamps changing, until 15:46:05.350; the interleaved fio throughput checkpoints from that stretch are kept below]
00:33:43.820 18831.00 IOPS, 147.12 MiB/s [2024-11-06T14:46:01.803Z]
00:33:44.864 18880.00 IOPS, 147.50 MiB/s [2024-11-06T14:46:02.847Z]
00:33:45.647 18904.67 IOPS, 147.69 MiB/s [2024-11-06T14:46:03.630Z]
00:33:46.692 18870.00 IOPS, 147.42 MiB/s [2024-11-06T14:46:04.675Z]
00:33:47.550 [2024-11-06 15:46:05.350174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:47.550 [2024-11-06 15:46:05.350193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:47.550 [2024-11-06 15:46:05.364002]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.364017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.550 [2024-11-06 15:46:05.377288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.377303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.550 [2024-11-06 15:46:05.390342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.390355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.550 [2024-11-06 15:46:05.404239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.404253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.550 [2024-11-06 15:46:05.417550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.417564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.550 [2024-11-06 15:46:05.432162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.432176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.550 [2024-11-06 15:46:05.444973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.444987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.550 [2024-11-06 15:46:05.458029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.458043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.550 [2024-11-06 15:46:05.472174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.472189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.550 [2024-11-06 15:46:05.485417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.485432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.550 [2024-11-06 15:46:05.498316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.498330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.550 [2024-11-06 15:46:05.512513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.512527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.550 [2024-11-06 15:46:05.525938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.550 [2024-11-06 15:46:05.525951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.811 [2024-11-06 15:46:05.540130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.811 [2024-11-06 15:46:05.540145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.811 [2024-11-06 15:46:05.553506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.811 [2024-11-06 15:46:05.553520] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.811 [2024-11-06 15:46:05.568429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.811 [2024-11-06 15:46:05.568444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.811 [2024-11-06 15:46:05.581433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.811 [2024-11-06 15:46:05.581448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.811 18888.60 IOPS, 147.57 MiB/s [2024-11-06T14:46:05.794Z] [2024-11-06 15:46:05.594030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.811 [2024-11-06 15:46:05.594044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.811 00:33:47.811 Latency(us) 00:33:47.811 [2024-11-06T14:46:05.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.811 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:47.811 Nvme1n1 : 5.01 18890.63 147.58 0.00 0.00 6769.36 2894.51 11741.87 00:33:47.811 [2024-11-06T14:46:05.794Z] =================================================================================================================== 00:33:47.811 [2024-11-06T14:46:05.794Z] Total : 18890.63 147.58 0.00 0.00 6769.36 2894.51 11741.87 00:33:47.811 [2024-11-06 15:46:05.605229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.811 [2024-11-06 15:46:05.605242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.811 [2024-11-06 15:46:05.617237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.812 [2024-11-06 15:46:05.617249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.812 [2024-11-06 15:46:05.629230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.812 [2024-11-06 15:46:05.629245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.812 [2024-11-06 15:46:05.641231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.812 [2024-11-06 15:46:05.641244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.812 [2024-11-06 15:46:05.653227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.812 [2024-11-06 15:46:05.653237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.812 [2024-11-06 15:46:05.665226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.812 [2024-11-06 15:46:05.665235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.812 [2024-11-06 15:46:05.677227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.812 [2024-11-06 15:46:05.677237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.812 [2024-11-06 15:46:05.689227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.812 [2024-11-06 15:46:05.689236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.812 [2024-11-06 15:46:05.701238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.812 [2024-11-06 
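A consistency note on the latency table above: bdevperf ran with an 8192-byte I/O size (see the Job line), so the MiB/s column is simply the IOPS column scaled by the I/O size. A one-liner to check the printed figures (illustrative only, not part of the test):

    # 18890.63 IOPS * 8192 B per I/O / 2^20 B per MiB ~= 147.58 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 18890.63 * 8192 / (1024 * 1024) }'

The same arithmetic applied to the in-flight progress tick (18888.60 IOPS) gives the 147.57 MiB/s it reports.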
00:33:47.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4038314) - No such process
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4038314
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:47.812 delay0
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:47.812 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:33:48.072 [2024-11-06 15:46:05.905904] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:33:54.652 Initializing NVMe Controllers
00:33:54.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:54.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:54.652 Initialization complete. Launching workers.
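To make the traced zcopy.sh steps easier to follow: the script frees NSID 1, wraps malloc0 in a delay bdev so that in-flight I/O lives long enough to be aborted, re-exposes the slow bdev as NSID 1, and then runs the abort example against the target. The equivalent sequence as direct rpc.py calls (a sketch; zcopy.sh issues these through its rpc_cmd wrapper, and the rpc= path is assumed from this workspace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # free NSID 1 so the delay bdev can take its place
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # wrap malloc0 in a delay bdev; the four values are avg/p99 read and write latencies in microseconds
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the deliberately slow bdev as NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # queue 64-deep random read/write for 5 seconds and abort whatever is still pending
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'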
00:33:54.652 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1609
00:33:54.652 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1891, failed to submit 38
00:33:54.652 success 1747, unsuccessful 144, failed 0
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:54.652 rmmod nvme_tcp
00:33:54.652 rmmod nvme_fabrics
00:33:54.652 rmmod nvme_keyring
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 4036209 ']'
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 4036209
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 4036209 ']'
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 4036209
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4036209
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4036209'
00:33:54.652 killing process with pid 4036209
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 4036209
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 4036209
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:54.652 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
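The abort run's counters above are internally consistent: 1747 successful plus 144 unsuccessful aborts account for all 1891 submitted, and 1891 submitted plus 38 failed-to-submit equals the 320 completed plus 1609 failed I/Os (1929 in total) that the workers generated. A quick check:

    awk 'BEGIN { print 1747 + 144, 1891 + 38, 320 + 1609 }'   # prints: 1891 1929 1929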
15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:56.565 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:56.565
00:33:56.565 real 0m33.853s
00:33:56.565 user 0m42.723s
00:33:56.565 sys 0m12.557s
00:33:56.565 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:33:56.565 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:56.565 ************************************
00:33:56.565 END TEST nvmf_zcopy
00:33:56.565 ************************************
00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:56.827 ************************************
00:33:56.827 START TEST nvmf_nmic
00:33:56.827 ************************************
00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:33:56.827 * Looking for test storage...
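The real/user/sys block and the starred banners come from the run_test wrapper in autotest_common.sh, which times each test script and brackets its output. In spirit it behaves like this (a simplified re-creation for orientation, not the actual SPDK helper, whose argument checks are traced above):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"    # emits the real/user/sys lines seen above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    run_test nvmf_nmic ./test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode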
00:33:56.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:56.827 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:57.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.089 --rc genhtml_branch_coverage=1 00:33:57.089 --rc genhtml_function_coverage=1 00:33:57.089 --rc genhtml_legend=1 00:33:57.089 --rc geninfo_all_blocks=1 00:33:57.089 --rc geninfo_unexecuted_blocks=1 00:33:57.089 00:33:57.089 ' 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:57.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.089 --rc genhtml_branch_coverage=1 00:33:57.089 --rc genhtml_function_coverage=1 00:33:57.089 --rc genhtml_legend=1 00:33:57.089 --rc geninfo_all_blocks=1 00:33:57.089 --rc geninfo_unexecuted_blocks=1 00:33:57.089 00:33:57.089 ' 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:57.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.089 --rc genhtml_branch_coverage=1 00:33:57.089 --rc genhtml_function_coverage=1 00:33:57.089 --rc genhtml_legend=1 00:33:57.089 --rc geninfo_all_blocks=1 00:33:57.089 --rc geninfo_unexecuted_blocks=1 00:33:57.089 00:33:57.089 ' 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:57.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.089 --rc genhtml_branch_coverage=1 00:33:57.089 --rc genhtml_function_coverage=1 00:33:57.089 --rc genhtml_legend=1 00:33:57.089 --rc geninfo_all_blocks=1 00:33:57.089 --rc geninfo_unexecuted_blocks=1 00:33:57.089 00:33:57.089 ' 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:57.089 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.090 15:46:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:57.090 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:05.226 15:46:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:05.226 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.226 15:46:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:05.226 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:05.226 Found net devices under 0000:31:00.0: cvl_0_0 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.226 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.226 
15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:05.227 Found net devices under 0000:31:00.1: cvl_0_1 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
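The nvmf_tcp_init trace running through here splits the two e810 ports between network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Stripped of the xtrace noise, the setup performed across the surrounding lines reduces to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # one iptables ACCEPT rule for port 4420, then a ping in each direction verifies the link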
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:05.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:05.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms
00:34:05.227
00:34:05.227 --- 10.0.0.2 ping statistics ---
00:34:05.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:05.227 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:05.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:05.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms
00:34:05.227
00:34:05.227 --- 10.0.0.1 ping statistics ---
00:34:05.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:05.227 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=4044729
00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 4044729 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 4044729 ']' 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:05.227 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.227 [2024-11-06 15:46:22.457347] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:05.227 [2024-11-06 15:46:22.458497] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:34:05.227 [2024-11-06 15:46:22.458547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:05.227 [2024-11-06 15:46:22.558107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:05.227 [2024-11-06 15:46:22.612612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:05.227 [2024-11-06 15:46:22.612664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:05.227 [2024-11-06 15:46:22.612672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:05.227 [2024-11-06 15:46:22.612679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:05.227 [2024-11-06 15:46:22.612686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:05.227 [2024-11-06 15:46:22.614957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.227 [2024-11-06 15:46:22.615131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:05.227 [2024-11-06 15:46:22.615288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:05.227 [2024-11-06 15:46:22.615290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.227 [2024-11-06 15:46:22.694424] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:05.227 [2024-11-06 15:46:22.695505] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:05.227 [2024-11-06 15:46:22.695764] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
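The waitforlisten call traced above gates the rest of the test on the target's RPC socket: no rpc_cmd is issued until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. The core of the helper is a poll loop along these lines (a sketch; the real function in autotest_common.sh is more thorough):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    waitforlisten() {
        local pid=$1
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1            # target died during startup
            "$rpc" rpc_get_methods &>/dev/null && return 0    # RPC socket is answering
            sleep 0.1
        done
        return 1
    }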
00:34:05.227 [2024-11-06 15:46:22.696246] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:05.227 [2024-11-06 15:46:22.696273] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.488 [2024-11-06 15:46:23.324291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.488 Malloc0 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
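The add_listener call above completes the nmic baseline: cnode1 now exposes Malloc0 and accepts NVMe/TCP connections on 10.0.0.2:4420 (its trace and the listening notice continue just below). Test case 1 then deliberately hands the same Malloc0 to a second subsystem, which must fail because the first subsystem already holds an exclusive_write claim on the bdev. A simplified sketch of the expected-failure check, condensed from the target/nmic.sh trace that follows (rpc.py stands in for the test's rpc_cmd wrapper, and the status plumbing is abbreviated):

    # test case1: single bdev can't be used in multiple subsystems
    nmic_status=0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
    if [ "$nmic_status" -eq 0 ]; then
        echo ' Adding namespace passed - failure expected.'
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'

The rejection shows up below as "bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target", followed by a JSON-RPC error with code -32602 (Invalid parameters), which is exactly the outcome the script counts as a pass.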
00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.488 [2024-11-06 15:46:23.416613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:05.488 test case1: single bdev can't be used in multiple subsystems 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.488 [2024-11-06 15:46:23.451933] bdev.c:8462:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:05.488 [2024-11-06 15:46:23.451957] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:05.488 [2024-11-06 15:46:23.451966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.488 request: 00:34:05.488 { 00:34:05.488 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:05.488 "namespace": { 00:34:05.488 "bdev_name": "Malloc0", 00:34:05.488 "no_auto_visible": false, 00:34:05.488 "no_metadata": false 00:34:05.488 }, 00:34:05.488 "method": "nvmf_subsystem_add_ns", 00:34:05.488 "req_id": 1 00:34:05.488 } 00:34:05.488 Got JSON-RPC error response 00:34:05.488 response: 00:34:05.488 { 00:34:05.488 "code": -32602, 00:34:05.488 "message": "Invalid parameters" 00:34:05.488 } 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:05.488 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:05.489 15:46:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:05.489 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:05.489 Adding namespace failed - expected result. 00:34:05.489 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:05.489 test case2: host connect to nvmf target in multiple paths 00:34:05.489 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:05.489 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.489 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.489 [2024-11-06 15:46:23.464060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:05.489 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.489 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:06.060 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:06.633 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:06.633 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:34:06.633 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:06.633 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:34:06.633 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:34:08.548 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:08.548 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:08.548 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:08.548 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:34:08.548 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:08.548 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:34:08.548 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:08.548 [global] 00:34:08.548 thread=1 00:34:08.548 invalidate=1 
00:34:08.548 rw=write 00:34:08.548 time_based=1 00:34:08.548 runtime=1 00:34:08.548 ioengine=libaio 00:34:08.548 direct=1 00:34:08.548 bs=4096 00:34:08.548 iodepth=1 00:34:08.548 norandommap=0 00:34:08.548 numjobs=1 00:34:08.548 00:34:08.548 verify_dump=1 00:34:08.548 verify_backlog=512 00:34:08.548 verify_state_save=0 00:34:08.548 do_verify=1 00:34:08.548 verify=crc32c-intel 00:34:08.548 [job0] 00:34:08.548 filename=/dev/nvme0n1 00:34:08.548 Could not set queue depth (nvme0n1) 00:34:09.116 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:09.116 fio-3.35 00:34:09.116 Starting 1 thread 00:34:10.057 00:34:10.057 job0: (groupid=0, jobs=1): err= 0: pid=4045893: Wed Nov 6 15:46:27 2024 00:34:10.057 read: IOPS=17, BW=71.5KiB/s (73.2kB/s)(72.0KiB/1007msec) 00:34:10.057 slat (nsec): min=26890, max=29930, avg=27413.11, stdev=722.18 00:34:10.057 clat (usec): min=1028, max=42022, avg=37289.72, stdev=13192.31 00:34:10.057 lat (usec): min=1058, max=42049, avg=37317.14, stdev=13191.94 00:34:10.057 clat percentiles (usec): 00:34:10.057 | 1.00th=[ 1029], 5.00th=[ 1029], 10.00th=[ 1045], 20.00th=[41157], 00:34:10.057 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:10.057 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:10.057 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:10.057 | 99.99th=[42206] 00:34:10.057 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:34:10.057 slat (usec): min=9, max=27818, avg=86.41, stdev=1228.02 00:34:10.057 clat (usec): min=234, max=864, avg=555.31, stdev=85.08 00:34:10.057 lat (usec): min=245, max=28299, avg=641.72, stdev=1227.97 00:34:10.057 clat percentiles (usec): 00:34:10.057 | 1.00th=[ 330], 5.00th=[ 408], 10.00th=[ 441], 20.00th=[ 494], 00:34:10.057 | 30.00th=[ 529], 40.00th=[ 537], 50.00th=[ 553], 60.00th=[ 586], 00:34:10.057 | 70.00th=[ 611], 80.00th=[ 627], 90.00th=[ 652], 95.00th=[ 676], 00:34:10.057 | 99.00th=[ 734], 99.50th=[ 758], 99.90th=[ 865], 99.95th=[ 865], 00:34:10.057 | 99.99th=[ 865] 00:34:10.057 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:10.057 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:10.057 lat (usec) : 250=0.19%, 500=20.57%, 750=75.09%, 1000=0.75% 00:34:10.057 lat (msec) : 2=0.38%, 50=3.02% 00:34:10.057 cpu : usr=0.60%, sys=2.58%, ctx=535, majf=0, minf=1 00:34:10.057 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.057 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.057 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:10.057 00:34:10.057 Run status group 0 (all jobs): 00:34:10.057 READ: bw=71.5KiB/s (73.2kB/s), 71.5KiB/s-71.5KiB/s (73.2kB/s-73.2kB/s), io=72.0KiB (73.7kB), run=1007-1007msec 00:34:10.057 WRITE: bw=2034KiB/s (2083kB/s), 2034KiB/s-2034KiB/s (2083kB/s-2083kB/s), io=2048KiB (2097kB), run=1007-1007msec 00:34:10.057 00:34:10.057 Disk stats (read/write): 00:34:10.057 nvme0n1: ios=39/512, merge=0/0, ticks=1512/223, in_queue=1735, util=98.50% 00:34:10.057 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:10.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 
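Two details in this tail are easy to miss. First, the job file fio just ran was emitted by scripts/fio-wrapper, whose flags map onto the [global] section printed above (the mapping is inferred from that output: -i 4096 -> bs=4096, -d 1 -> iodepth=1, -t write -> rw=write, -r 1 -> runtime=1 with time_based=1, -v -> do_verify=1 with verify=crc32c-intel). Second, "disconnected 2 controller(s)" is the expected count for test case 2: the host connected the one subsystem once per listener (ports 4420 and 4421), so a single disconnect by NQN tears down both paths. A minimal nvme-cli sketch of that pattern (the --hostnqn/--hostid options the test passes are omitted here):

    # two listeners on one subsystem -> two controllers on the host
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    # one disconnect by NQN removes every controller of that subsystem
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1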
00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.319 rmmod nvme_tcp 00:34:10.319 rmmod nvme_fabrics 00:34:10.319 rmmod nvme_keyring 00:34:10.319 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.320 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:10.320 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:10.320 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 4044729 ']' 00:34:10.320 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 4044729 00:34:10.320 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 4044729 ']' 00:34:10.320 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 4044729 00:34:10.320 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:34:10.320 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:10.320 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4044729 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 4044729' 00:34:10.581 killing process with pid 4044729 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 4044729 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 4044729 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.581 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:13.139 00:34:13.139 real 0m15.937s 00:34:13.139 user 0m33.239s 00:34:13.139 sys 0m7.418s 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:13.139 ************************************ 00:34:13.139 END TEST nvmf_nmic 00:34:13.139 ************************************ 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:13.139 ************************************ 00:34:13.139 START TEST nvmf_fio_target 00:34:13.139 ************************************ 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:13.139 * Looking for test storage... 
00:34:13.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:13.139 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:13.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.140 --rc genhtml_branch_coverage=1 00:34:13.140 --rc genhtml_function_coverage=1 00:34:13.140 --rc genhtml_legend=1 00:34:13.140 --rc geninfo_all_blocks=1 00:34:13.140 --rc geninfo_unexecuted_blocks=1 00:34:13.140 00:34:13.140 ' 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:13.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.140 --rc genhtml_branch_coverage=1 00:34:13.140 --rc genhtml_function_coverage=1 00:34:13.140 --rc genhtml_legend=1 00:34:13.140 --rc geninfo_all_blocks=1 00:34:13.140 --rc geninfo_unexecuted_blocks=1 00:34:13.140 00:34:13.140 ' 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:13.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.140 --rc genhtml_branch_coverage=1 00:34:13.140 --rc genhtml_function_coverage=1 00:34:13.140 --rc genhtml_legend=1 00:34:13.140 --rc geninfo_all_blocks=1 00:34:13.140 --rc geninfo_unexecuted_blocks=1 00:34:13.140 00:34:13.140 ' 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:13.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.140 --rc genhtml_branch_coverage=1 00:34:13.140 --rc genhtml_function_coverage=1 00:34:13.140 --rc genhtml_legend=1 00:34:13.140 --rc geninfo_all_blocks=1 00:34:13.140 --rc geninfo_unexecuted_blocks=1 00:34:13.140 
00:34:13.140 ' 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:13.140 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.141 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:13.141 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.141 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:13.141 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:13.141 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:13.141 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:21.288 15:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:21.288 15:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:21.288 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:21.288 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:21.288 Found net 
devices under 0000:31:00.0: cvl_0_0 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:21.288 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:21.289 Found net devices under 0000:31:00.1: cvl_0_1 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:21.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:21.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:34:21.289 00:34:21.289 --- 10.0.0.2 ping statistics --- 00:34:21.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.289 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:21.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:21.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:34:21.289 00:34:21.289 --- 10.0.0.1 ping statistics --- 00:34:21.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.289 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=4050259 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 4050259 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 4050259 ']' 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
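The "Waiting for process to start up..." echo comes from waitforlisten: nvmfappstart has just backgrounded nvmf_tgt inside the cvl_0_0_ns_spdk namespace (core mask 0xF, interrupt mode, all tracepoint groups via -e 0xFFFF), and the harness now polls until the app answers on its default JSON-RPC socket. A simplified sketch of that wait; the retry bound matches the max_retries=100 echoed above, while the poll interval and the exact RPC used are assumptions about the helper in common/autotest_common.sh:

    nvmfpid=$!                 # PID of the backgrounded nvmf_tgt
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1    # bail out if the target died
        # consider the app up once its RPC socket answers a trivial call
        scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.5              # interval assumed
    done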
00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:21.289 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:21.289 [2024-11-06 15:46:38.512995] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:21.289 [2024-11-06 15:46:38.514122] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:34:21.289 [2024-11-06 15:46:38.514172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:21.289 [2024-11-06 15:46:38.615918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:21.289 [2024-11-06 15:46:38.669851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:21.289 [2024-11-06 15:46:38.669926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:21.289 [2024-11-06 15:46:38.669935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:21.289 [2024-11-06 15:46:38.669942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:21.289 [2024-11-06 15:46:38.669949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:21.289 [2024-11-06 15:46:38.672020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:21.289 [2024-11-06 15:46:38.672181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:21.289 [2024-11-06 15:46:38.672317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:21.289 [2024-11-06 15:46:38.672318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.289 [2024-11-06 15:46:38.750917] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:21.289 [2024-11-06 15:46:38.752268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:21.289 [2024-11-06 15:46:38.752307] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:21.289 [2024-11-06 15:46:38.752817] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:21.289 [2024-11-06 15:46:38.752842] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
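With all reactors up and every nvmf_tgt poll-group thread switched to interrupt mode, waitforlisten returns (first lines below) and fio.sh provisions the target over JSON-RPC. Condensed, the calls traced through the rest of this section are (rpc.py abbreviates the full scripts/rpc.py path the log uses):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512      # repeated for Malloc0 .. Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # then Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

Four namespaces on one subsystem is why the later waitforserial call expects 4 devices: the host-side connect yields /dev/nvme0n1 through /dev/nvme0n4, one per fio job.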
00:34:21.551 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:21.551 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:34:21.551 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:21.551 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:21.551 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:21.551 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.551 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:21.551 [2024-11-06 15:46:39.529363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.811 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:22.071 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:22.071 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:22.071 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:22.072 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:22.332 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:22.333 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:22.594 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:22.594 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:22.855 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:22.855 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:22.855 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:23.115 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:23.115 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:23.376 15:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:23.377 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:23.637 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:23.637 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:23.637 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:23.898 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:23.898 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:24.159 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:24.159 [2024-11-06 15:46:42.109289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:24.420 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:24.420 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:24.681 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:25.255 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:25.255 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:34:25.255 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:25.255 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:34:25.255 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:34:25.255 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:34:27.168 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:27.168 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:34:27.168 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:27.168 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:34:27.168 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:27.168 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:34:27.168 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:27.168 [global] 00:34:27.168 thread=1 00:34:27.168 invalidate=1 00:34:27.168 rw=write 00:34:27.168 time_based=1 00:34:27.168 runtime=1 00:34:27.168 ioengine=libaio 00:34:27.168 direct=1 00:34:27.168 bs=4096 00:34:27.168 iodepth=1 00:34:27.168 norandommap=0 00:34:27.168 numjobs=1 00:34:27.168 00:34:27.168 verify_dump=1 00:34:27.168 verify_backlog=512 00:34:27.168 verify_state_save=0 00:34:27.168 do_verify=1 00:34:27.168 verify=crc32c-intel 00:34:27.168 [job0] 00:34:27.168 filename=/dev/nvme0n1 00:34:27.168 [job1] 00:34:27.168 filename=/dev/nvme0n2 00:34:27.168 [job2] 00:34:27.168 filename=/dev/nvme0n3 00:34:27.168 [job3] 00:34:27.168 filename=/dev/nvme0n4 00:34:27.451 Could not set queue depth (nvme0n1) 00:34:27.451 Could not set queue depth (nvme0n2) 00:34:27.451 Could not set queue depth (nvme0n3) 00:34:27.451 Could not set queue depth (nvme0n4) 00:34:27.711 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:27.711 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:27.711 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:27.712 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:27.712 fio-3.35 00:34:27.712 Starting 4 threads 00:34:29.176 00:34:29.176 job0: (groupid=0, jobs=1): err= 0: pid=4051842: Wed Nov 6 15:46:46 2024 00:34:29.176 read: IOPS=18, BW=74.4KiB/s (76.2kB/s)(76.0KiB/1021msec) 00:34:29.176 slat (nsec): min=26073, max=27278, avg=26659.74, stdev=236.60 00:34:29.176 clat (usec): min=40824, max=41920, avg=41056.99, stdev=268.03 00:34:29.176 lat (usec): min=40851, max=41947, avg=41083.65, stdev=268.09 00:34:29.176 clat percentiles (usec): 00:34:29.176 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:34:29.176 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:29.176 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:29.176 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:29.176 | 99.99th=[41681] 00:34:29.176 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:34:29.176 slat (nsec): min=8625, max=61960, avg=28823.80, stdev=11094.43 00:34:29.176 clat (usec): min=246, max=2507, avg=433.14, stdev=115.45 00:34:29.176 lat (usec): min=255, max=2555, avg=461.97, stdev=118.95 00:34:29.176 clat percentiles (usec): 00:34:29.176 | 1.00th=[ 258], 5.00th=[ 297], 10.00th=[ 334], 20.00th=[ 359], 00:34:29.176 | 30.00th=[ 396], 40.00th=[ 429], 50.00th=[ 445], 60.00th=[ 457], 00:34:29.176 | 70.00th=[ 469], 80.00th=[ 486], 90.00th=[ 506], 95.00th=[ 537], 
00:34:29.176 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 2507], 99.95th=[ 2507], 00:34:29.176 | 99.99th=[ 2507] 00:34:29.176 bw ( KiB/s): min= 4096, max= 4096, per=40.67%, avg=4096.00, stdev= 0.00, samples=1 00:34:29.176 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:29.176 lat (usec) : 250=0.56%, 500=83.62%, 750=12.05% 00:34:29.176 lat (msec) : 4=0.19%, 50=3.58% 00:34:29.176 cpu : usr=1.37%, sys=1.47%, ctx=531, majf=0, minf=1 00:34:29.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.176 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:29.176 job1: (groupid=0, jobs=1): err= 0: pid=4051844: Wed Nov 6 15:46:46 2024 00:34:29.176 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:29.176 slat (nsec): min=24272, max=58542, avg=25466.52, stdev=2655.77 00:34:29.176 clat (usec): min=671, max=1254, avg=982.04, stdev=89.08 00:34:29.176 lat (usec): min=696, max=1279, avg=1007.51, stdev=89.17 00:34:29.176 clat percentiles (usec): 00:34:29.176 | 1.00th=[ 766], 5.00th=[ 832], 10.00th=[ 873], 20.00th=[ 922], 00:34:29.176 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 988], 60.00th=[ 996], 00:34:29.176 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1123], 00:34:29.176 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1254], 99.95th=[ 1254], 00:34:29.176 | 99.99th=[ 1254] 00:34:29.176 write: IOPS=762, BW=3049KiB/s (3122kB/s)(3052KiB/1001msec); 0 zone resets 00:34:29.176 slat (nsec): min=9612, max=52443, avg=29542.12, stdev=8770.74 00:34:29.176 clat (usec): min=217, max=982, avg=592.37, stdev=132.40 00:34:29.176 lat (usec): min=228, max=1015, avg=621.91, stdev=135.64 00:34:29.176 clat percentiles (usec): 00:34:29.176 | 1.00th=[ 281], 5.00th=[ 363], 10.00th=[ 408], 20.00th=[ 482], 00:34:29.176 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 627], 00:34:29.176 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 791], 00:34:29.176 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 979], 99.95th=[ 979], 00:34:29.176 | 99.99th=[ 979] 00:34:29.176 bw ( KiB/s): min= 4096, max= 4096, per=40.67%, avg=4096.00, stdev= 0.00, samples=1 00:34:29.176 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:29.176 lat (usec) : 250=0.24%, 500=13.57%, 750=39.92%, 1000=30.59% 00:34:29.176 lat (msec) : 2=15.69% 00:34:29.176 cpu : usr=2.10%, sys=3.50%, ctx=1276, majf=0, minf=1 00:34:29.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.176 issued rwts: total=512,763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:29.176 job2: (groupid=0, jobs=1): err= 0: pid=4051845: Wed Nov 6 15:46:46 2024 00:34:29.176 read: IOPS=64, BW=257KiB/s (263kB/s)(260KiB/1011msec) 00:34:29.176 slat (nsec): min=7039, max=29862, avg=22371.20, stdev=8179.85 00:34:29.176 clat (usec): min=462, max=42036, avg=11429.80, stdev=17998.77 00:34:29.176 lat (usec): min=483, max=42064, avg=11452.17, stdev=18001.90 00:34:29.176 clat percentiles (usec): 00:34:29.176 | 1.00th=[ 461], 5.00th=[ 537], 10.00th=[ 635], 20.00th=[ 685], 
00:34:29.176 | 30.00th=[ 783], 40.00th=[ 824], 50.00th=[ 914], 60.00th=[ 947], 00:34:29.176 | 70.00th=[ 979], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:34:29.176 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:29.176 | 99.99th=[42206] 00:34:29.176 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:34:29.176 slat (nsec): min=9647, max=67361, avg=33682.13, stdev=8737.32 00:34:29.176 clat (usec): min=148, max=823, avg=479.42, stdev=103.55 00:34:29.176 lat (usec): min=183, max=860, avg=513.10, stdev=106.36 00:34:29.177 clat percentiles (usec): 00:34:29.177 | 1.00th=[ 255], 5.00th=[ 314], 10.00th=[ 347], 20.00th=[ 392], 00:34:29.177 | 30.00th=[ 420], 40.00th=[ 449], 50.00th=[ 474], 60.00th=[ 506], 00:34:29.177 | 70.00th=[ 537], 80.00th=[ 578], 90.00th=[ 611], 95.00th=[ 652], 00:34:29.177 | 99.00th=[ 709], 99.50th=[ 734], 99.90th=[ 824], 99.95th=[ 824], 00:34:29.177 | 99.99th=[ 824] 00:34:29.177 bw ( KiB/s): min= 4096, max= 4096, per=40.67%, avg=4096.00, stdev= 0.00, samples=1 00:34:29.177 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:29.177 lat (usec) : 250=0.69%, 500=51.13%, 750=39.34%, 1000=5.72% 00:34:29.177 lat (msec) : 2=0.17%, 50=2.95% 00:34:29.177 cpu : usr=0.69%, sys=2.77%, ctx=578, majf=0, minf=1 00:34:29.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.177 issued rwts: total=65,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.177 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:29.177 job3: (groupid=0, jobs=1): err= 0: pid=4051846: Wed Nov 6 15:46:46 2024 00:34:29.177 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:29.177 slat (nsec): min=27155, max=46394, avg=27807.70, stdev=1449.80 00:34:29.177 clat (usec): min=535, max=1206, avg=960.36, stdev=77.95 00:34:29.177 lat (usec): min=563, max=1234, avg=988.17, stdev=77.85 00:34:29.177 clat percentiles (usec): 00:34:29.177 | 1.00th=[ 701], 5.00th=[ 840], 10.00th=[ 873], 20.00th=[ 914], 00:34:29.177 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:34:29.177 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:34:29.177 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1205], 99.95th=[ 1205], 00:34:29.177 | 99.99th=[ 1205] 00:34:29.177 write: IOPS=783, BW=3133KiB/s (3208kB/s)(3136KiB/1001msec); 0 zone resets 00:34:29.177 slat (nsec): min=9667, max=69893, avg=32835.54, stdev=9713.67 00:34:29.177 clat (usec): min=256, max=879, avg=584.51, stdev=111.71 00:34:29.177 lat (usec): min=267, max=915, avg=617.35, stdev=115.32 00:34:29.177 clat percentiles (usec): 00:34:29.177 | 1.00th=[ 306], 5.00th=[ 379], 10.00th=[ 445], 20.00th=[ 490], 00:34:29.177 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 619], 00:34:29.177 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 758], 00:34:29.177 | 99.00th=[ 816], 99.50th=[ 824], 99.90th=[ 881], 99.95th=[ 881], 00:34:29.177 | 99.99th=[ 881] 00:34:29.177 bw ( KiB/s): min= 4096, max= 4096, per=40.67%, avg=4096.00, stdev= 0.00, samples=1 00:34:29.177 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:29.177 lat (usec) : 500=13.66%, 750=43.83%, 1000=31.17% 00:34:29.177 lat (msec) : 2=11.34% 00:34:29.177 cpu : usr=3.20%, sys=4.80%, ctx=1298, majf=0, minf=1 00:34:29.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.177 issued rwts: total=512,784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.177 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:29.177 00:34:29.177 Run status group 0 (all jobs): 00:34:29.177 READ: bw=4341KiB/s (4445kB/s), 74.4KiB/s-2046KiB/s (76.2kB/s-2095kB/s), io=4432KiB (4538kB), run=1001-1021msec 00:34:29.177 WRITE: bw=9.84MiB/s (10.3MB/s), 2006KiB/s-3133KiB/s (2054kB/s-3208kB/s), io=10.0MiB (10.5MB), run=1001-1021msec 00:34:29.177 00:34:29.177 Disk stats (read/write): 00:34:29.177 nvme0n1: ios=64/512, merge=0/0, ticks=609/172, in_queue=781, util=86.37% 00:34:29.177 nvme0n2: ios=540/512, merge=0/0, ticks=571/294, in_queue=865, util=92.66% 00:34:29.177 nvme0n3: ios=83/512, merge=0/0, ticks=1500/205, in_queue=1705, util=97.58% 00:34:29.177 nvme0n4: ios=535/524, merge=0/0, ticks=1414/236, in_queue=1650, util=97.34% 00:34:29.177 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:29.177 [global] 00:34:29.177 thread=1 00:34:29.177 invalidate=1 00:34:29.177 rw=randwrite 00:34:29.177 time_based=1 00:34:29.177 runtime=1 00:34:29.177 ioengine=libaio 00:34:29.177 direct=1 00:34:29.177 bs=4096 00:34:29.177 iodepth=1 00:34:29.177 norandommap=0 00:34:29.177 numjobs=1 00:34:29.177 00:34:29.177 verify_dump=1 00:34:29.177 verify_backlog=512 00:34:29.177 verify_state_save=0 00:34:29.177 do_verify=1 00:34:29.177 verify=crc32c-intel 00:34:29.177 [job0] 00:34:29.177 filename=/dev/nvme0n1 00:34:29.177 [job1] 00:34:29.177 filename=/dev/nvme0n2 00:34:29.177 [job2] 00:34:29.177 filename=/dev/nvme0n3 00:34:29.177 [job3] 00:34:29.177 filename=/dev/nvme0n4 00:34:29.177 Could not set queue depth (nvme0n1) 00:34:29.177 Could not set queue depth (nvme0n2) 00:34:29.177 Could not set queue depth (nvme0n3) 00:34:29.177 Could not set queue depth (nvme0n4) 00:34:29.438 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:29.438 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:29.438 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:29.438 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:29.438 fio-3.35 00:34:29.438 Starting 4 threads 00:34:30.382 00:34:30.382 job0: (groupid=0, jobs=1): err= 0: pid=4052357: Wed Nov 6 15:46:48 2024 00:34:30.382 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:30.382 slat (nsec): min=25179, max=58934, avg=26666.07, stdev=3383.92 00:34:30.382 clat (usec): min=501, max=1380, avg=1023.19, stdev=106.80 00:34:30.382 lat (usec): min=527, max=1406, avg=1049.86, stdev=106.71 00:34:30.382 clat percentiles (usec): 00:34:30.382 | 1.00th=[ 742], 5.00th=[ 840], 10.00th=[ 898], 20.00th=[ 930], 00:34:30.382 | 30.00th=[ 963], 40.00th=[ 1004], 50.00th=[ 1037], 60.00th=[ 1057], 00:34:30.382 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1139], 95.00th=[ 1188], 00:34:30.382 | 99.00th=[ 1237], 99.50th=[ 1237], 99.90th=[ 1385], 99.95th=[ 1385], 00:34:30.382 | 99.99th=[ 1385] 00:34:30.382 write: IOPS=654, BW=2617KiB/s (2680kB/s)(2620KiB/1001msec); 0 zone resets 00:34:30.382 
slat (nsec): min=4065, max=59108, avg=30464.57, stdev=9042.41 00:34:30.382 clat (usec): min=243, max=1183, avg=660.49, stdev=152.39 00:34:30.382 lat (usec): min=258, max=1187, avg=690.96, stdev=154.72 00:34:30.382 clat percentiles (usec): 00:34:30.382 | 1.00th=[ 302], 5.00th=[ 408], 10.00th=[ 457], 20.00th=[ 537], 00:34:30.382 | 30.00th=[ 586], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 701], 00:34:30.382 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 857], 95.00th=[ 914], 00:34:30.382 | 99.00th=[ 996], 99.50th=[ 1029], 99.90th=[ 1188], 99.95th=[ 1188], 00:34:30.382 | 99.99th=[ 1188] 00:34:30.382 bw ( KiB/s): min= 4096, max= 4096, per=44.98%, avg=4096.00, stdev= 0.00, samples=1 00:34:30.382 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:30.382 lat (usec) : 250=0.09%, 500=8.31%, 750=33.33%, 1000=31.28% 00:34:30.382 lat (msec) : 2=26.99% 00:34:30.382 cpu : usr=1.50%, sys=3.80%, ctx=1171, majf=0, minf=1 00:34:30.382 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:30.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.382 issued rwts: total=512,655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.382 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:30.382 job1: (groupid=0, jobs=1): err= 0: pid=4052360: Wed Nov 6 15:46:48 2024 00:34:30.382 read: IOPS=16, BW=66.9KiB/s (68.5kB/s)(68.0KiB/1016msec) 00:34:30.382 slat (nsec): min=8004, max=28570, avg=26742.71, stdev=4833.82 00:34:30.382 clat (usec): min=1020, max=42023, avg=39440.12, stdev=9905.17 00:34:30.382 lat (usec): min=1048, max=42051, avg=39466.86, stdev=9904.95 00:34:30.382 clat percentiles (usec): 00:34:30.382 | 1.00th=[ 1020], 5.00th=[ 1020], 10.00th=[41157], 20.00th=[41681], 00:34:30.382 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:30.382 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:30.382 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:30.382 | 99.99th=[42206] 00:34:30.382 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:34:30.382 slat (nsec): min=9385, max=56786, avg=33001.99, stdev=8826.68 00:34:30.382 clat (usec): min=178, max=965, avg=631.40, stdev=128.54 00:34:30.382 lat (usec): min=213, max=1019, avg=664.40, stdev=131.49 00:34:30.382 clat percentiles (usec): 00:34:30.382 | 1.00th=[ 285], 5.00th=[ 420], 10.00th=[ 461], 20.00th=[ 529], 00:34:30.382 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 668], 00:34:30.382 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 840], 00:34:30.382 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 963], 99.95th=[ 963], 00:34:30.382 | 99.99th=[ 963] 00:34:30.382 bw ( KiB/s): min= 4096, max= 4096, per=44.98%, avg=4096.00, stdev= 0.00, samples=1 00:34:30.382 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:30.382 lat (usec) : 250=0.38%, 500=13.80%, 750=66.54%, 1000=16.07% 00:34:30.382 lat (msec) : 2=0.19%, 50=3.02% 00:34:30.382 cpu : usr=1.28%, sys=1.97%, ctx=531, majf=0, minf=1 00:34:30.382 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:30.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.382 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.382 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:34:30.382 job2: (groupid=0, jobs=1): err= 0: pid=4052362: Wed Nov 6 15:46:48 2024 00:34:30.382 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:30.382 slat (nsec): min=24484, max=43728, avg=25993.92, stdev=2252.73 00:34:30.382 clat (usec): min=664, max=1457, avg=1041.51, stdev=108.26 00:34:30.382 lat (usec): min=690, max=1482, avg=1067.51, stdev=108.33 00:34:30.382 clat percentiles (usec): 00:34:30.382 | 1.00th=[ 775], 5.00th=[ 840], 10.00th=[ 898], 20.00th=[ 955], 00:34:30.382 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:34:30.382 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1205], 00:34:30.382 | 99.00th=[ 1287], 99.50th=[ 1319], 99.90th=[ 1450], 99.95th=[ 1450], 00:34:30.382 | 99.99th=[ 1450] 00:34:30.382 write: IOPS=633, BW=2533KiB/s (2594kB/s)(2536KiB/1001msec); 0 zone resets 00:34:30.382 slat (nsec): min=9598, max=73016, avg=30642.33, stdev=6476.43 00:34:30.382 clat (usec): min=182, max=1131, avg=670.46, stdev=159.65 00:34:30.382 lat (usec): min=191, max=1163, avg=701.10, stdev=161.00 00:34:30.382 clat percentiles (usec): 00:34:30.382 | 1.00th=[ 302], 5.00th=[ 420], 10.00th=[ 469], 20.00th=[ 545], 00:34:30.382 | 30.00th=[ 594], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 701], 00:34:30.382 | 70.00th=[ 734], 80.00th=[ 783], 90.00th=[ 898], 95.00th=[ 963], 00:34:30.382 | 99.00th=[ 1057], 99.50th=[ 1106], 99.90th=[ 1139], 99.95th=[ 1139], 00:34:30.382 | 99.99th=[ 1139] 00:34:30.382 bw ( KiB/s): min= 4096, max= 4096, per=44.98%, avg=4096.00, stdev= 0.00, samples=1 00:34:30.382 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:30.382 lat (usec) : 250=0.26%, 500=7.77%, 750=32.64%, 1000=27.31% 00:34:30.382 lat (msec) : 2=32.02% 00:34:30.382 cpu : usr=1.20%, sys=3.90%, ctx=1147, majf=0, minf=2 00:34:30.382 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:30.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.382 issued rwts: total=512,634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.382 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:30.382 job3: (groupid=0, jobs=1): err= 0: pid=4052363: Wed Nov 6 15:46:48 2024 00:34:30.382 read: IOPS=15, BW=63.6KiB/s (65.1kB/s)(64.0KiB/1007msec) 00:34:30.382 slat (nsec): min=26078, max=27026, avg=26325.31, stdev=253.05 00:34:30.382 clat (usec): min=40863, max=42046, avg=41646.07, stdev=438.44 00:34:30.382 lat (usec): min=40890, max=42073, avg=41672.40, stdev=438.51 00:34:30.382 clat percentiles (usec): 00:34:30.382 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:30.382 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:30.382 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:30.382 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:30.382 | 99.99th=[42206] 00:34:30.382 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:34:30.382 slat (nsec): min=10149, max=56954, avg=33268.54, stdev=6618.94 00:34:30.382 clat (usec): min=231, max=1118, avg=621.40, stdev=165.47 00:34:30.382 lat (usec): min=255, max=1167, avg=654.66, stdev=167.03 00:34:30.382 clat percentiles (usec): 00:34:30.382 | 1.00th=[ 253], 5.00th=[ 326], 10.00th=[ 396], 20.00th=[ 482], 00:34:30.382 | 30.00th=[ 529], 40.00th=[ 578], 50.00th=[ 627], 60.00th=[ 668], 00:34:30.382 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 832], 
95.00th=[ 914], 00:34:30.382 | 99.00th=[ 996], 99.50th=[ 1020], 99.90th=[ 1123], 99.95th=[ 1123], 00:34:30.382 | 99.99th=[ 1123] 00:34:30.382 bw ( KiB/s): min= 4096, max= 4096, per=44.98%, avg=4096.00, stdev= 0.00, samples=1 00:34:30.382 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:30.382 lat (usec) : 250=0.76%, 500=21.97%, 750=53.60%, 1000=19.70% 00:34:30.382 lat (msec) : 2=0.95%, 50=3.03% 00:34:30.382 cpu : usr=0.99%, sys=1.49%, ctx=529, majf=0, minf=1 00:34:30.382 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:30.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.382 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.382 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:30.382 00:34:30.382 Run status group 0 (all jobs): 00:34:30.382 READ: bw=4161KiB/s (4261kB/s), 63.6KiB/s-2046KiB/s (65.1kB/s-2095kB/s), io=4228KiB (4329kB), run=1001-1016msec 00:34:30.382 WRITE: bw=9106KiB/s (9325kB/s), 2016KiB/s-2617KiB/s (2064kB/s-2680kB/s), io=9252KiB (9474kB), run=1001-1016msec 00:34:30.382 00:34:30.383 Disk stats (read/write): 00:34:30.383 nvme0n1: ios=480/512, merge=0/0, ticks=1433/333, in_queue=1766, util=97.29% 00:34:30.383 nvme0n2: ios=42/512, merge=0/0, ticks=672/263, in_queue=935, util=99.08% 00:34:30.383 nvme0n3: ios=486/512, merge=0/0, ticks=560/340, in_queue=900, util=92.24% 00:34:30.383 nvme0n4: ios=38/512, merge=0/0, ticks=1428/296, in_queue=1724, util=97.56% 00:34:30.643 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:30.643 [global] 00:34:30.643 thread=1 00:34:30.643 invalidate=1 00:34:30.643 rw=write 00:34:30.643 time_based=1 00:34:30.643 runtime=1 00:34:30.643 ioengine=libaio 00:34:30.643 direct=1 00:34:30.643 bs=4096 00:34:30.644 iodepth=128 00:34:30.644 norandommap=0 00:34:30.644 numjobs=1 00:34:30.644 00:34:30.644 verify_dump=1 00:34:30.644 verify_backlog=512 00:34:30.644 verify_state_save=0 00:34:30.644 do_verify=1 00:34:30.644 verify=crc32c-intel 00:34:30.644 [job0] 00:34:30.644 filename=/dev/nvme0n1 00:34:30.644 [job1] 00:34:30.644 filename=/dev/nvme0n2 00:34:30.644 [job2] 00:34:30.644 filename=/dev/nvme0n3 00:34:30.644 [job3] 00:34:30.644 filename=/dev/nvme0n4 00:34:30.644 Could not set queue depth (nvme0n1) 00:34:30.644 Could not set queue depth (nvme0n2) 00:34:30.644 Could not set queue depth (nvme0n3) 00:34:30.644 Could not set queue depth (nvme0n4) 00:34:30.904 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.904 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.904 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.904 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.904 fio-3.35 00:34:30.904 Starting 4 threads 00:34:32.297 00:34:32.297 job0: (groupid=0, jobs=1): err= 0: pid=4052805: Wed Nov 6 15:46:50 2024 00:34:32.297 read: IOPS=4539, BW=17.7MiB/s (18.6MB/s)(18.0MiB/1015msec) 00:34:32.297 slat (nsec): min=974, max=17111k, avg=104289.28, stdev=823822.68 00:34:32.297 clat (usec): min=3679, max=63305, avg=13026.19, stdev=8901.39 00:34:32.297 lat (usec): 
min=3685, max=63314, avg=13130.48, stdev=8978.97 00:34:32.297 clat percentiles (usec): 00:34:32.297 | 1.00th=[ 5145], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6783], 00:34:32.297 | 30.00th=[ 7701], 40.00th=[ 9110], 50.00th=[10945], 60.00th=[12125], 00:34:32.297 | 70.00th=[13042], 80.00th=[16909], 90.00th=[23987], 95.00th=[26608], 00:34:32.297 | 99.00th=[50594], 99.50th=[58459], 99.90th=[63177], 99.95th=[63177], 00:34:32.297 | 99.99th=[63177] 00:34:32.297 write: IOPS=4734, BW=18.5MiB/s (19.4MB/s)(18.8MiB/1015msec); 0 zone resets 00:34:32.297 slat (nsec): min=1680, max=11206k, avg=101068.06, stdev=611163.72 00:34:32.297 clat (usec): min=808, max=63259, avg=14270.18, stdev=11301.74 00:34:32.297 lat (usec): min=817, max=63262, avg=14371.25, stdev=11360.36 00:34:32.297 clat percentiles (usec): 00:34:32.297 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 6390], 00:34:32.297 | 30.00th=[ 7046], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[11863], 00:34:32.297 | 70.00th=[16450], 80.00th=[19530], 90.00th=[30802], 95.00th=[43254], 00:34:32.297 | 99.00th=[51643], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:34:32.297 | 99.99th=[63177] 00:34:32.297 bw ( KiB/s): min=16944, max=20480, per=25.91%, avg=18712.00, stdev=2500.33, samples=2 00:34:32.297 iops : min= 4236, max= 5120, avg=4678.00, stdev=625.08, samples=2 00:34:32.297 lat (usec) : 1000=0.03% 00:34:32.297 lat (msec) : 2=0.10%, 4=0.42%, 10=48.90%, 20=33.66%, 50=15.39% 00:34:32.297 lat (msec) : 100=1.50% 00:34:32.297 cpu : usr=4.54%, sys=4.24%, ctx=337, majf=0, minf=1 00:34:32.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:32.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:32.297 issued rwts: total=4608,4806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.297 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:32.297 job1: (groupid=0, jobs=1): err= 0: pid=4052830: Wed Nov 6 15:46:50 2024 00:34:32.297 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:34:32.297 slat (nsec): min=986, max=14487k, avg=102124.80, stdev=780418.72 00:34:32.297 clat (usec): min=1551, max=64508, avg=12864.27, stdev=7377.56 00:34:32.297 lat (usec): min=1564, max=64515, avg=12966.40, stdev=7451.78 00:34:32.297 clat percentiles (usec): 00:34:32.297 | 1.00th=[ 3851], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 7111], 00:34:32.297 | 30.00th=[ 8029], 40.00th=[ 9634], 50.00th=[11600], 60.00th=[13304], 00:34:32.297 | 70.00th=[14615], 80.00th=[17433], 90.00th=[20055], 95.00th=[24511], 00:34:32.297 | 99.00th=[40109], 99.50th=[55837], 99.90th=[64750], 99.95th=[64750], 00:34:32.297 | 99.99th=[64750] 00:34:32.297 write: IOPS=4896, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1005msec); 0 zone resets 00:34:32.297 slat (nsec): min=1693, max=10785k, avg=101335.60, stdev=629989.84 00:34:32.297 clat (usec): min=1366, max=73575, avg=13808.37, stdev=11695.87 00:34:32.297 lat (usec): min=1377, max=73587, avg=13909.71, stdev=11771.27 00:34:32.297 clat percentiles (usec): 00:34:32.297 | 1.00th=[ 3163], 5.00th=[ 4228], 10.00th=[ 4883], 20.00th=[ 6259], 00:34:32.297 | 30.00th=[ 7308], 40.00th=[ 8848], 50.00th=[10552], 60.00th=[12518], 00:34:32.297 | 70.00th=[15401], 80.00th=[18220], 90.00th=[22676], 95.00th=[33817], 00:34:32.297 | 99.00th=[67634], 99.50th=[68682], 99.90th=[73925], 99.95th=[73925], 00:34:32.297 | 99.99th=[73925] 00:34:32.297 bw ( KiB/s): min=17864, max=20480, per=26.55%, avg=19172.00, stdev=1849.79, 
samples=2 00:34:32.297 iops : min= 4466, max= 5120, avg=4793.00, stdev=462.45, samples=2 00:34:32.297 lat (msec) : 2=0.19%, 4=1.79%, 10=41.40%, 20=44.56%, 50=9.90% 00:34:32.297 lat (msec) : 100=2.16% 00:34:32.297 cpu : usr=3.88%, sys=5.88%, ctx=337, majf=0, minf=1 00:34:32.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:32.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:32.297 issued rwts: total=4608,4921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.297 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:32.297 job2: (groupid=0, jobs=1): err= 0: pid=4052857: Wed Nov 6 15:46:50 2024 00:34:32.297 read: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec) 00:34:32.297 slat (nsec): min=1045, max=14543k, avg=106234.45, stdev=822023.44 00:34:32.297 clat (usec): min=3005, max=35464, avg=13551.57, stdev=6604.84 00:34:32.297 lat (usec): min=3394, max=35476, avg=13657.80, stdev=6658.09 00:34:32.297 clat percentiles (usec): 00:34:32.297 | 1.00th=[ 3916], 5.00th=[ 5407], 10.00th=[ 7046], 20.00th=[ 8586], 00:34:32.297 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[11863], 60.00th=[13829], 00:34:32.297 | 70.00th=[15008], 80.00th=[19006], 90.00th=[22938], 95.00th=[27919], 00:34:32.297 | 99.00th=[30802], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:34:32.297 | 99.99th=[35390] 00:34:32.297 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(15.5MiB/1014msec); 0 zone resets 00:34:32.297 slat (nsec): min=1688, max=29065k, avg=140517.11, stdev=884896.60 00:34:32.297 clat (usec): min=736, max=69687, avg=19011.92, stdev=15466.80 00:34:32.297 lat (usec): min=800, max=69696, avg=19152.44, stdev=15576.89 00:34:32.297 clat percentiles (usec): 00:34:32.297 | 1.00th=[ 1729], 5.00th=[ 3621], 10.00th=[ 5342], 20.00th=[ 7242], 00:34:32.297 | 30.00th=[ 8455], 40.00th=[11338], 50.00th=[15270], 60.00th=[17433], 00:34:32.297 | 70.00th=[20579], 80.00th=[27657], 90.00th=[47449], 95.00th=[55837], 00:34:32.297 | 99.00th=[65799], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731], 00:34:32.297 | 99.99th=[69731] 00:34:32.297 bw ( KiB/s): min=10288, max=20480, per=21.30%, avg=15384.00, stdev=7206.83, samples=2 00:34:32.297 iops : min= 2572, max= 5120, avg=3846.00, stdev=1801.71, samples=2 00:34:32.297 lat (usec) : 750=0.01%, 1000=0.01% 00:34:32.297 lat (msec) : 2=0.78%, 4=3.47%, 10=33.00%, 20=36.76%, 50=21.82% 00:34:32.297 lat (msec) : 100=4.15% 00:34:32.297 cpu : usr=2.57%, sys=4.24%, ctx=381, majf=0, minf=2 00:34:32.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:32.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:32.297 issued rwts: total=3584,3974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.297 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:32.297 job3: (groupid=0, jobs=1): err= 0: pid=4052867: Wed Nov 6 15:46:50 2024 00:34:32.298 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:34:32.298 slat (nsec): min=932, max=11741k, avg=77038.99, stdev=574418.22 00:34:32.298 clat (usec): min=2361, max=89198, avg=11175.68, stdev=7747.36 00:34:32.298 lat (usec): min=2368, max=89203, avg=11252.72, stdev=7772.12 00:34:32.298 clat percentiles (usec): 00:34:32.298 | 1.00th=[ 3916], 5.00th=[ 4948], 10.00th=[ 6063], 20.00th=[ 6980], 00:34:32.298 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8848], 
60.00th=[ 9765], 00:34:32.298 | 70.00th=[10421], 80.00th=[14615], 90.00th=[19268], 95.00th=[24511], 00:34:32.298 | 99.00th=[43779], 99.50th=[55313], 99.90th=[89654], 99.95th=[89654], 00:34:32.298 | 99.99th=[89654] 00:34:32.298 write: IOPS=4600, BW=18.0MiB/s (18.8MB/s)(18.1MiB/1005msec); 0 zone resets 00:34:32.298 slat (nsec): min=1590, max=13767k, avg=133507.22, stdev=820400.88 00:34:32.298 clat (usec): min=1140, max=126713, avg=16433.06, stdev=21805.83 00:34:32.298 lat (usec): min=1167, max=126720, avg=16566.57, stdev=21960.73 00:34:32.298 clat percentiles (msec): 00:34:32.298 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:34:32.298 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:34:32.298 | 70.00th=[ 12], 80.00th=[ 17], 90.00th=[ 46], 95.00th=[ 70], 00:34:32.298 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 127], 00:34:32.298 | 99.99th=[ 127] 00:34:32.298 bw ( KiB/s): min=11912, max=24952, per=25.52%, avg=18432.00, stdev=9220.67, samples=2 00:34:32.298 iops : min= 2978, max= 6238, avg=4608.00, stdev=2305.17, samples=2 00:34:32.298 lat (msec) : 2=0.10%, 4=1.48%, 10=62.91%, 20=22.85%, 50=8.15% 00:34:32.298 lat (msec) : 100=3.49%, 250=1.03% 00:34:32.298 cpu : usr=2.39%, sys=3.78%, ctx=417, majf=0, minf=1 00:34:32.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:32.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:32.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:32.298 issued rwts: total=4608,4623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:32.298 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:32.298 00:34:32.298 Run status group 0 (all jobs): 00:34:32.298 READ: bw=67.0MiB/s (70.2MB/s), 13.8MiB/s-17.9MiB/s (14.5MB/s-18.8MB/s), io=68.0MiB (71.3MB), run=1005-1015msec 00:34:32.298 WRITE: bw=70.5MiB/s (73.9MB/s), 15.3MiB/s-19.1MiB/s (16.1MB/s-20.1MB/s), io=71.6MiB (75.1MB), run=1005-1015msec 00:34:32.298 00:34:32.298 Disk stats (read/write): 00:34:32.298 nvme0n1: ios=3634/3807, merge=0/0, ticks=44191/50162, in_queue=94353, util=81.76% 00:34:32.298 nvme0n2: ios=4119/4143, merge=0/0, ticks=50359/46152, in_queue=96511, util=96.18% 00:34:32.298 nvme0n3: ios=3108/3359, merge=0/0, ticks=40865/49776, in_queue=90641, util=97.91% 00:34:32.298 nvme0n4: ios=3617/3943, merge=0/0, ticks=17406/35899, in_queue=53305, util=90.01% 00:34:32.298 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:32.298 [global] 00:34:32.298 thread=1 00:34:32.298 invalidate=1 00:34:32.298 rw=randwrite 00:34:32.298 time_based=1 00:34:32.298 runtime=1 00:34:32.298 ioengine=libaio 00:34:32.298 direct=1 00:34:32.298 bs=4096 00:34:32.298 iodepth=128 00:34:32.298 norandommap=0 00:34:32.298 numjobs=1 00:34:32.298 00:34:32.298 verify_dump=1 00:34:32.298 verify_backlog=512 00:34:32.298 verify_state_save=0 00:34:32.298 do_verify=1 00:34:32.298 verify=crc32c-intel 00:34:32.298 [job0] 00:34:32.298 filename=/dev/nvme0n1 00:34:32.298 [job1] 00:34:32.298 filename=/dev/nvme0n2 00:34:32.298 [job2] 00:34:32.298 filename=/dev/nvme0n3 00:34:32.298 [job3] 00:34:32.298 filename=/dev/nvme0n4 00:34:32.298 Could not set queue depth (nvme0n1) 00:34:32.298 Could not set queue depth (nvme0n2) 00:34:32.298 Could not set queue depth (nvme0n3) 00:34:32.298 Could not set queue depth (nvme0n4) 00:34:32.559 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:32.559 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:32.559 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:32.559 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:32.559 fio-3.35 00:34:32.559 Starting 4 threads 00:34:33.948 00:34:33.948 job0: (groupid=0, jobs=1): err= 0: pid=4053294: Wed Nov 6 15:46:51 2024 00:34:33.948 read: IOPS=5613, BW=21.9MiB/s (23.0MB/s)(22.1MiB/1007msec) 00:34:33.948 slat (nsec): min=898, max=13909k, avg=83010.68, stdev=649738.55 00:34:33.948 clat (usec): min=4293, max=38219, avg=10798.67, stdev=5542.03 00:34:33.948 lat (usec): min=4299, max=38246, avg=10881.68, stdev=5599.66 00:34:33.948 clat percentiles (usec): 00:34:33.948 | 1.00th=[ 5473], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7570], 00:34:33.948 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8717], 00:34:33.948 | 70.00th=[ 9503], 80.00th=[12649], 90.00th=[19792], 95.00th=[24511], 00:34:33.948 | 99.00th=[27919], 99.50th=[29492], 99.90th=[33817], 99.95th=[33817], 00:34:33.948 | 99.99th=[38011] 00:34:33.948 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:34:33.948 slat (nsec): min=1510, max=9902.4k, avg=79700.90, stdev=473434.44 00:34:33.948 clat (usec): min=1271, max=37924, avg=10864.07, stdev=5838.42 00:34:33.948 lat (usec): min=1281, max=37947, avg=10943.77, stdev=5880.11 00:34:33.948 clat percentiles (usec): 00:34:33.948 | 1.00th=[ 4752], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7570], 00:34:33.948 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:34:33.948 | 70.00th=[10945], 80.00th=[14222], 90.00th=[18744], 95.00th=[23725], 00:34:33.948 | 99.00th=[33817], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:34:33.948 | 99.99th=[38011] 00:34:33.948 bw ( KiB/s): min=23672, max=24632, per=23.73%, avg=24152.00, stdev=678.82, samples=2 00:34:33.948 iops : min= 5918, max= 6158, avg=6038.00, stdev=169.71, samples=2 00:34:33.948 lat (msec) : 2=0.06%, 4=0.02%, 10=69.41%, 20=22.57%, 50=7.94% 00:34:33.948 cpu : usr=3.98%, sys=6.56%, ctx=495, majf=0, minf=1 00:34:33.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:33.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.948 issued rwts: total=5653,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.948 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:33.948 job1: (groupid=0, jobs=1): err= 0: pid=4053302: Wed Nov 6 15:46:51 2024 00:34:33.948 read: IOPS=6102, BW=23.8MiB/s (25.0MB/s)(23.9MiB/1004msec) 00:34:33.948 slat (nsec): min=927, max=8316.8k, avg=76148.03, stdev=529473.14 00:34:33.948 clat (usec): min=1785, max=35733, avg=9508.64, stdev=3429.99 00:34:33.948 lat (usec): min=2960, max=35739, avg=9584.79, stdev=3471.75 00:34:33.948 clat percentiles (usec): 00:34:33.948 | 1.00th=[ 4490], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7570], 00:34:33.948 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:34:33.948 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[12518], 95.00th=[15533], 00:34:33.948 | 99.00th=[26346], 99.50th=[28181], 99.90th=[35390], 99.95th=[35914], 00:34:33.948 | 99.99th=[35914] 00:34:33.948 write: IOPS=6119, BW=23.9MiB/s 
(25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:34:33.948 slat (nsec): min=1504, max=9226.8k, avg=80760.19, stdev=446213.43 00:34:33.948 clat (usec): min=1142, max=35743, avg=11244.10, stdev=6640.80 00:34:33.948 lat (usec): min=1154, max=35748, avg=11324.86, stdev=6687.73 00:34:33.948 clat percentiles (usec): 00:34:33.948 | 1.00th=[ 3228], 5.00th=[ 4686], 10.00th=[ 5932], 20.00th=[ 6980], 00:34:33.948 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:34:33.948 | 70.00th=[11207], 80.00th=[13304], 90.00th=[23987], 95.00th=[28181], 00:34:33.948 | 99.00th=[31589], 99.50th=[32113], 99.90th=[33817], 99.95th=[33817], 00:34:33.948 | 99.99th=[35914] 00:34:33.948 bw ( KiB/s): min=20480, max=28672, per=24.15%, avg=24576.00, stdev=5792.62, samples=2 00:34:33.948 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:34:33.948 lat (msec) : 2=0.24%, 4=1.26%, 10=68.36%, 20=22.87%, 50=7.27% 00:34:33.948 cpu : usr=3.79%, sys=6.38%, ctx=517, majf=0, minf=2 00:34:33.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:33.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.948 issued rwts: total=6127,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.948 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:33.948 job2: (groupid=0, jobs=1): err= 0: pid=4053317: Wed Nov 6 15:46:51 2024 00:34:33.948 read: IOPS=6727, BW=26.3MiB/s (27.6MB/s)(26.3MiB/1002msec) 00:34:33.948 slat (nsec): min=931, max=3103.9k, avg=71918.89, stdev=375174.01 00:34:33.948 clat (usec): min=950, max=13246, avg=9116.45, stdev=993.06 00:34:33.948 lat (usec): min=3219, max=13754, avg=9188.37, stdev=1033.45 00:34:33.948 clat percentiles (usec): 00:34:33.948 | 1.00th=[ 6587], 5.00th=[ 7504], 10.00th=[ 8094], 20.00th=[ 8586], 00:34:33.948 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:34:33.948 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10421], 00:34:33.948 | 99.00th=[11338], 99.50th=[11731], 99.90th=[13042], 99.95th=[13042], 00:34:33.948 | 99.99th=[13304] 00:34:33.948 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:34:33.948 slat (nsec): min=1525, max=9976.8k, avg=68127.62, stdev=375555.24 00:34:33.948 clat (usec): min=1430, max=17532, avg=9101.89, stdev=1314.52 00:34:33.948 lat (usec): min=1445, max=17540, avg=9170.02, stdev=1324.01 00:34:33.948 clat percentiles (usec): 00:34:33.948 | 1.00th=[ 5932], 5.00th=[ 6980], 10.00th=[ 7767], 20.00th=[ 8160], 00:34:33.948 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:34:33.948 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10945], 00:34:33.948 | 99.00th=[12911], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:34:33.948 | 99.99th=[17433] 00:34:33.948 bw ( KiB/s): min=28336, max=28672, per=28.01%, avg=28504.00, stdev=237.59, samples=2 00:34:33.948 iops : min= 7084, max= 7168, avg=7126.00, stdev=59.40, samples=2 00:34:33.948 lat (usec) : 1000=0.01% 00:34:33.948 lat (msec) : 2=0.04%, 4=0.32%, 10=83.46%, 20=16.17% 00:34:33.948 cpu : usr=3.00%, sys=5.49%, ctx=698, majf=0, minf=1 00:34:33.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:33.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.948 issued rwts: total=6741,7168,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:33.948 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:33.948 job3: (groupid=0, jobs=1): err= 0: pid=4053323: Wed Nov 6 15:46:51 2024 00:34:33.948 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:34:33.949 slat (nsec): min=976, max=5312.0k, avg=79598.94, stdev=484613.48 00:34:33.949 clat (usec): min=3443, max=38640, avg=10404.73, stdev=4000.59 00:34:33.949 lat (usec): min=3448, max=38827, avg=10484.33, stdev=4019.39 00:34:33.949 clat percentiles (usec): 00:34:33.949 | 1.00th=[ 5997], 5.00th=[ 7111], 10.00th=[ 7963], 20.00th=[ 8848], 00:34:33.949 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[10028], 00:34:33.949 | 70.00th=[10683], 80.00th=[11207], 90.00th=[12649], 95.00th=[12911], 00:34:33.949 | 99.00th=[37487], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:34:33.949 | 99.99th=[38536] 00:34:33.949 write: IOPS=6140, BW=24.0MiB/s (25.2MB/s)(24.1MiB/1004msec); 0 zone resets 00:34:33.949 slat (nsec): min=1600, max=27038k, avg=77768.42, stdev=570981.38 00:34:33.949 clat (usec): min=1233, max=36153, avg=10252.46, stdev=4326.87 00:34:33.949 lat (usec): min=1244, max=37540, avg=10330.23, stdev=4368.74 00:34:33.949 clat percentiles (usec): 00:34:33.949 | 1.00th=[ 5538], 5.00th=[ 7046], 10.00th=[ 7635], 20.00th=[ 7963], 00:34:33.949 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:34:33.949 | 70.00th=[10159], 80.00th=[10814], 90.00th=[12518], 95.00th=[22414], 00:34:33.949 | 99.00th=[29230], 99.50th=[30016], 99.90th=[33424], 99.95th=[33424], 00:34:33.949 | 99.99th=[35914] 00:34:33.949 bw ( KiB/s): min=24576, max=24576, per=24.15%, avg=24576.00, stdev= 0.00, samples=2 00:34:33.949 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:34:33.949 lat (msec) : 2=0.05%, 4=0.24%, 10=62.91%, 20=33.16%, 50=3.65% 00:34:33.949 cpu : usr=4.09%, sys=5.18%, ctx=768, majf=0, minf=1 00:34:33.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:33.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.949 issued rwts: total=6144,6165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.949 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:33.949 00:34:33.949 Run status group 0 (all jobs): 00:34:33.949 READ: bw=95.7MiB/s (100MB/s), 21.9MiB/s-26.3MiB/s (23.0MB/s-27.6MB/s), io=96.3MiB (101MB), run=1002-1007msec 00:34:33.949 WRITE: bw=99.4MiB/s (104MB/s), 23.8MiB/s-27.9MiB/s (25.0MB/s-29.3MB/s), io=100MiB (105MB), run=1002-1007msec 00:34:33.949 00:34:33.949 Disk stats (read/write): 00:34:33.949 nvme0n1: ios=5170/5175, merge=0/0, ticks=27043/23797, in_queue=50840, util=88.38% 00:34:33.949 nvme0n2: ios=4643/5015, merge=0/0, ticks=32208/45393, in_queue=77601, util=86.86% 00:34:33.949 nvme0n3: ios=5632/6088, merge=0/0, ticks=16773/18027, in_queue=34800, util=88.34% 00:34:33.949 nvme0n4: ios=5155/5503, merge=0/0, ticks=22079/21828, in_queue=43907, util=99.89% 00:34:33.949 15:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:33.949 15:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4053434 00:34:33.949 15:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:33.949 15:46:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:33.949 [global] 00:34:33.949 thread=1 00:34:33.949 invalidate=1 00:34:33.949 rw=read 00:34:33.949 time_based=1 00:34:33.949 runtime=10 00:34:33.949 ioengine=libaio 00:34:33.949 direct=1 00:34:33.949 bs=4096 00:34:33.949 iodepth=1 00:34:33.949 norandommap=1 00:34:33.949 numjobs=1 00:34:33.949 00:34:33.949 [job0] 00:34:33.949 filename=/dev/nvme0n1 00:34:33.949 [job1] 00:34:33.949 filename=/dev/nvme0n2 00:34:33.949 [job2] 00:34:33.949 filename=/dev/nvme0n3 00:34:33.949 [job3] 00:34:33.949 filename=/dev/nvme0n4 00:34:33.949 Could not set queue depth (nvme0n1) 00:34:33.949 Could not set queue depth (nvme0n2) 00:34:33.949 Could not set queue depth (nvme0n3) 00:34:33.949 Could not set queue depth (nvme0n4) 00:34:34.210 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:34.210 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:34.210 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:34.210 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:34.210 fio-3.35 00:34:34.210 Starting 4 threads 00:34:36.757 15:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:37.018 15:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:37.018 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=7872512, buflen=4096 00:34:37.018 fio: pid=4053781, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:37.278 15:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:37.278 15:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:37.278 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=266240, buflen=4096 00:34:37.278 fio: pid=4053771, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:37.278 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10321920, buflen=4096 00:34:37.278 fio: pid=4053719, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:37.539 15:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:37.539 15:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:37.539 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=303104, buflen=4096 00:34:37.539 fio: pid=4053743, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:37.539 15:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:37.539 15:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:37.539 00:34:37.539 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4053719: Wed Nov 6 15:46:55 2024 00:34:37.539 read: IOPS=863, BW=3452KiB/s (3535kB/s)(9.84MiB/2920msec) 00:34:37.539 slat (usec): min=10, max=29883, avg=51.70, stdev=727.81 00:34:37.539 clat (usec): min=570, max=1522, avg=1091.74, stdev=107.27 00:34:37.539 lat (usec): min=596, max=31073, avg=1143.45, stdev=739.44 00:34:37.539 clat percentiles (usec): 00:34:37.539 | 1.00th=[ 840], 5.00th=[ 914], 10.00th=[ 955], 20.00th=[ 1004], 00:34:37.539 | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1123], 00:34:37.539 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1270], 00:34:37.539 | 99.00th=[ 1352], 99.50th=[ 1385], 99.90th=[ 1434], 99.95th=[ 1467], 00:34:37.539 | 99.99th=[ 1516] 00:34:37.539 bw ( KiB/s): min= 3512, max= 3608, per=60.58%, avg=3574.40, stdev=38.12, samples=5 00:34:37.539 iops : min= 878, max= 902, avg=893.60, stdev= 9.53, samples=5 00:34:37.539 lat (usec) : 750=0.24%, 1000=19.56% 00:34:37.539 lat (msec) : 2=80.17% 00:34:37.539 cpu : usr=0.92%, sys=2.60%, ctx=2526, majf=0, minf=1 00:34:37.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.539 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.539 issued rwts: total=2521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:37.539 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4053743: Wed Nov 6 15:46:55 2024 00:34:37.539 read: IOPS=24, BW=95.3KiB/s (97.6kB/s)(296KiB/3106msec) 00:34:37.539 slat (usec): min=23, max=12600, avg=394.07, stdev=1990.27 00:34:37.539 clat (usec): min=824, max=42137, avg=41274.14, stdev=4776.67 00:34:37.539 lat (usec): min=861, max=54224, avg=41673.20, stdev=5198.19 00:34:37.539 clat percentiles (usec): 00:34:37.539 | 1.00th=[ 824], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:37.539 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:37.539 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:37.539 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:37.539 | 99.99th=[42206] 00:34:37.539 bw ( KiB/s): min= 89, max= 96, per=1.59%, avg=94.83, stdev= 2.86, samples=6 00:34:37.539 iops : min= 22, max= 24, avg=23.67, stdev= 0.82, samples=6 00:34:37.539 lat (usec) : 1000=1.33% 00:34:37.539 lat (msec) : 50=97.33% 00:34:37.539 cpu : usr=0.00%, sys=0.10%, ctx=78, majf=0, minf=2 00:34:37.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.539 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.539 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:37.539 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4053771: Wed Nov 6 15:46:55 2024 00:34:37.539 read: IOPS=23, BW=94.2KiB/s (96.5kB/s)(260KiB/2759msec) 00:34:37.539 slat (usec): min=24, max=21602, avg=352.76, 
stdev=2655.88 00:34:37.539 clat (usec): min=40840, max=42193, avg=41748.63, stdev=392.65 00:34:37.539 lat (usec): min=40866, max=63104, avg=42106.42, stdev=2673.94 00:34:37.539 clat percentiles (usec): 00:34:37.539 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:37.539 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:37.539 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:37.539 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:37.539 | 99.99th=[42206] 00:34:37.539 bw ( KiB/s): min= 88, max= 96, per=1.59%, avg=94.40, stdev= 3.58, samples=5 00:34:37.539 iops : min= 22, max= 24, avg=23.60, stdev= 0.89, samples=5 00:34:37.539 lat (msec) : 50=98.48% 00:34:37.539 cpu : usr=0.00%, sys=0.11%, ctx=67, majf=0, minf=2 00:34:37.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.539 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.539 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:37.539 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4053781: Wed Nov 6 15:46:55 2024 00:34:37.539 read: IOPS=740, BW=2960KiB/s (3031kB/s)(7688KiB/2597msec) 00:34:37.539 slat (nsec): min=6791, max=64281, avg=26874.12, stdev=3508.38 00:34:37.539 clat (usec): min=397, max=42047, avg=1306.89, stdev=3531.97 00:34:37.539 lat (usec): min=424, max=42078, avg=1333.77, stdev=3531.98 00:34:37.539 clat percentiles (usec): 00:34:37.539 | 1.00th=[ 644], 5.00th=[ 766], 10.00th=[ 832], 20.00th=[ 906], 00:34:37.539 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 1004], 60.00th=[ 1029], 00:34:37.539 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:34:37.539 | 99.00th=[ 1270], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:34:37.539 | 99.99th=[42206] 00:34:37.539 bw ( KiB/s): min= 96, max= 3896, per=52.07%, avg=3072.00, stdev=1665.74, samples=5 00:34:37.539 iops : min= 24, max= 974, avg=768.00, stdev=416.43, samples=5 00:34:37.539 lat (usec) : 500=0.21%, 750=4.16%, 1000=42.64% 00:34:37.539 lat (msec) : 2=52.00%, 10=0.16%, 50=0.78% 00:34:37.539 cpu : usr=1.23%, sys=3.04%, ctx=1923, majf=0, minf=2 00:34:37.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.539 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.539 issued rwts: total=1923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:37.539 00:34:37.539 Run status group 0 (all jobs): 00:34:37.539 READ: bw=5900KiB/s (6041kB/s), 94.2KiB/s-3452KiB/s (96.5kB/s-3535kB/s), io=17.9MiB (18.8MB), run=2597-3106msec 00:34:37.539 00:34:37.539 Disk stats (read/write): 00:34:37.539 nvme0n1: ios=2425/0, merge=0/0, ticks=2584/0, in_queue=2584, util=91.09% 00:34:37.539 nvme0n2: ios=72/0, merge=0/0, ticks=2972/0, in_queue=2972, util=93.78% 00:34:37.539 nvme0n3: ios=60/0, merge=0/0, ticks=2507/0, in_queue=2507, util=95.60% 00:34:37.539 nvme0n4: ios=1921/0, merge=0/0, ticks=2350/0, in_queue=2350, util=96.32% 00:34:37.801 15:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:34:37.801 15:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:38.061 15:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:38.061 15:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:38.061 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:38.061 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:38.322 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:38.322 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 4053434 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:38.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:38.583 nvmf hotplug test: fio failed as expected 00:34:38.583 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:38.843 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:34:38.843 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:38.843 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:38.843 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:38.843 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:38.843 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:38.843 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:38.843 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:38.843 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:38.843 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:38.843 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:38.843 rmmod nvme_tcp 00:34:38.843 rmmod nvme_fabrics 00:34:38.843 rmmod nvme_keyring 00:34:38.843 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 4050259 ']' 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 4050259 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 4050259 ']' 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 4050259 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4050259 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4050259' 00:34:38.844 killing process with pid 4050259 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 4050259 00:34:38.844 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 4050259 00:34:39.104 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:39.104 15:46:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:39.104 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:39.104 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:39.104 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:39.104 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:39.104 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:39.104 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:39.104 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:39.104 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.104 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.104 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:41.652 00:34:41.652 real 0m28.381s 00:34:41.652 user 2m15.842s 00:34:41.652 sys 0m12.267s 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.652 ************************************ 00:34:41.652 END TEST nvmf_fio_target 00:34:41.652 ************************************ 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:41.652 ************************************ 00:34:41.652 START TEST nvmf_bdevio 00:34:41.652 ************************************ 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:41.652 * Looking for test storage... 
00:34:41.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:41.652 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:41.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.653 --rc genhtml_branch_coverage=1 00:34:41.653 --rc genhtml_function_coverage=1 00:34:41.653 --rc genhtml_legend=1 00:34:41.653 --rc geninfo_all_blocks=1 00:34:41.653 --rc geninfo_unexecuted_blocks=1 00:34:41.653 00:34:41.653 ' 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:41.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.653 --rc genhtml_branch_coverage=1 00:34:41.653 --rc genhtml_function_coverage=1 00:34:41.653 --rc genhtml_legend=1 00:34:41.653 --rc geninfo_all_blocks=1 00:34:41.653 --rc geninfo_unexecuted_blocks=1 00:34:41.653 00:34:41.653 ' 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:41.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.653 --rc genhtml_branch_coverage=1 00:34:41.653 --rc genhtml_function_coverage=1 00:34:41.653 --rc genhtml_legend=1 00:34:41.653 --rc geninfo_all_blocks=1 00:34:41.653 --rc geninfo_unexecuted_blocks=1 00:34:41.653 00:34:41.653 ' 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:41.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.653 --rc genhtml_branch_coverage=1 00:34:41.653 --rc genhtml_function_coverage=1 00:34:41.653 --rc genhtml_legend=1 00:34:41.653 --rc geninfo_all_blocks=1 00:34:41.653 --rc geninfo_unexecuted_blocks=1 00:34:41.653 00:34:41.653 ' 00:34:41.653 15:46:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.653 15:46:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:41.653 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.654 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:41.654 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:41.654 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:41.654 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:49.796 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:49.796 15:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:49.796 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:49.796 Found net devices under 0000:31:00.0: cvl_0_0 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:49.796 Found net devices under 0000:31:00.1: cvl_0_1 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:49.796 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:49.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:49.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:34:49.796 00:34:49.796 --- 10.0.0.2 ping statistics --- 00:34:49.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:49.797 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:49.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:49.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:34:49.797 00:34:49.797 --- 10.0.0.1 ping statistics --- 00:34:49.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:49.797 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:49.797 15:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=4058830 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 4058830 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 4058830 ']' 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:49.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:49.797 15:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.797 [2024-11-06 15:47:06.981958] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:49.797 [2024-11-06 15:47:06.983444] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:34:49.797 [2024-11-06 15:47:06.983515] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:49.797 [2024-11-06 15:47:07.083843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:49.797 [2024-11-06 15:47:07.135392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:49.797 [2024-11-06 15:47:07.135439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:49.797 [2024-11-06 15:47:07.135448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:49.797 [2024-11-06 15:47:07.135456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:49.797 [2024-11-06 15:47:07.135462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:49.797 [2024-11-06 15:47:07.137851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:49.797 [2024-11-06 15:47:07.138014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:49.797 [2024-11-06 15:47:07.138153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:49.797 [2024-11-06 15:47:07.138154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:49.797 [2024-11-06 15:47:07.223887] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
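The target-side provisioning the trace performs next (create the TCP transport, back it with a malloc bdev, publish it as an NVMe-oF subsystem with a listener) can be replayed by hand with the same rpc.py calls. A minimal sketch using only commands and values recorded in this run; the rpc.py path and the 10.0.0.2 listen address are specific to this rig:

  # rpc.py from this workspace; the target itself was started inside the cvl_0_0_ns_spdk netns above,
  # but its RPC socket lives on the host filesystem, so no netns exec is needed here.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, flags exactly as the harness passes them
  $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420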
00:34:49.797 [2024-11-06 15:47:07.225230] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:49.797 [2024-11-06 15:47:07.225349] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:49.797 [2024-11-06 15:47:07.226139] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:49.797 [2024-11-06 15:47:07.226176] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:50.059 [2024-11-06 15:47:07.839015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:50.059 Malloc0 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.059 15:47:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:50.059 [2024-11-06 15:47:07.931133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:50.059 { 00:34:50.059 "params": { 00:34:50.059 "name": "Nvme$subsystem", 00:34:50.059 "trtype": "$TEST_TRANSPORT", 00:34:50.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.059 "adrfam": "ipv4", 00:34:50.059 "trsvcid": "$NVMF_PORT", 00:34:50.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.059 "hdgst": ${hdgst:-false}, 00:34:50.059 "ddgst": ${ddgst:-false} 00:34:50.059 }, 00:34:50.059 "method": "bdev_nvme_attach_controller" 00:34:50.059 } 00:34:50.059 EOF 00:34:50.059 )") 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:50.059 15:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:50.059 "params": { 00:34:50.059 "name": "Nvme1", 00:34:50.059 "trtype": "tcp", 00:34:50.059 "traddr": "10.0.0.2", 00:34:50.059 "adrfam": "ipv4", 00:34:50.059 "trsvcid": "4420", 00:34:50.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:50.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:50.059 "hdgst": false, 00:34:50.059 "ddgst": false 00:34:50.059 }, 00:34:50.059 "method": "bdev_nvme_attach_controller" 00:34:50.059 }' 00:34:50.059 [2024-11-06 15:47:07.990306] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
00:34:50.059 [2024-11-06 15:47:07.990365] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059020 ] 00:34:50.320 [2024-11-06 15:47:08.084216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:50.321 [2024-11-06 15:47:08.140482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:50.321 [2024-11-06 15:47:08.140647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:50.321 [2024-11-06 15:47:08.140647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.582 I/O targets: 00:34:50.582 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:50.582 00:34:50.582 00:34:50.582 CUnit - A unit testing framework for C - Version 2.1-3 00:34:50.582 http://cunit.sourceforge.net/ 00:34:50.582 00:34:50.582 00:34:50.582 Suite: bdevio tests on: Nvme1n1 00:34:50.582 Test: blockdev write read block ...passed 00:34:50.582 Test: blockdev write zeroes read block ...passed 00:34:50.582 Test: blockdev write zeroes read no split ...passed 00:34:50.582 Test: blockdev write zeroes read split ...passed 00:34:50.582 Test: blockdev write zeroes read split partial ...passed 00:34:50.582 Test: blockdev reset ...[2024-11-06 15:47:08.467836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:50.582 [2024-11-06 15:47:08.467931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf71c0 (9): Bad file descriptor 00:34:50.582 [2024-11-06 15:47:08.563797] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:34:50.843 passed 00:34:50.843 Test: blockdev write read 8 blocks ...passed 00:34:50.843 Test: blockdev write read size > 128k ...passed 00:34:50.843 Test: blockdev write read invalid size ...passed 00:34:50.843 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:50.843 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:50.843 Test: blockdev write read max offset ...passed 00:34:50.843 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:50.843 Test: blockdev writev readv 8 blocks ...passed 00:34:50.843 Test: blockdev writev readv 30 x 1block ...passed 00:34:50.843 Test: blockdev writev readv block ...passed 00:34:51.105 Test: blockdev writev readv size > 128k ...passed 00:34:51.105 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:51.105 Test: blockdev comparev and writev ...[2024-11-06 15:47:08.832727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:51.105 [2024-11-06 15:47:08.832779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.105 [2024-11-06 15:47:08.832797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:51.105 [2024-11-06 15:47:08.832806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.105 [2024-11-06 15:47:08.833443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:51.105 [2024-11-06 15:47:08.833457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:51.105 [2024-11-06 15:47:08.833479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:51.105 [2024-11-06 15:47:08.833487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:51.105 [2024-11-06 15:47:08.834018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:51.105 [2024-11-06 15:47:08.834033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:51.105 [2024-11-06 15:47:08.834048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:51.105 [2024-11-06 15:47:08.834057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:51.105 [2024-11-06 15:47:08.834709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:51.105 [2024-11-06 15:47:08.834720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:51.105 [2024-11-06 15:47:08.834733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:51.105 [2024-11-06 15:47:08.834741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:51.105 passed 00:34:51.105 Test: blockdev nvme passthru rw ...passed 00:34:51.105 Test: blockdev nvme passthru vendor specific ...[2024-11-06 15:47:08.919701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:51.105 [2024-11-06 15:47:08.919716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:51.105 [2024-11-06 15:47:08.920112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:51.105 [2024-11-06 15:47:08.920124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:51.105 [2024-11-06 15:47:08.920525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:51.105 [2024-11-06 15:47:08.920536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:51.105 [2024-11-06 15:47:08.920928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:51.105 [2024-11-06 15:47:08.920941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:51.105 passed 00:34:51.105 Test: blockdev nvme admin passthru ...passed 00:34:51.105 Test: blockdev copy ...passed 00:34:51.105 00:34:51.105 Run Summary: Type Total Ran Passed Failed Inactive 00:34:51.105 suites 1 1 n/a 0 0 00:34:51.105 tests 23 23 23 0 0 00:34:51.105 asserts 152 152 152 0 n/a 00:34:51.105 00:34:51.105 Elapsed time = 1.281 seconds 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:51.367 rmmod nvme_tcp 00:34:51.367 rmmod nvme_fabrics 00:34:51.367 rmmod nvme_keyring 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
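Teardown is the mirror image: drop the subsystem over RPC, stop the target, then undo the initiator-side plumbing. A sketch assembled from the commands this trace runs (reusing $RPC from the sketch above; the pid, interface, and netns names are this rig's):

  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the subsystem before stopping nvmf_tgt
  kill 4058830                                            # nvmfpid captured by waitforlisten at startup
  modprobe -v -r nvme-tcp                                 # unload initiator-side kernel modules
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip only the SPDK_NVMF-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                         # roughly what _remove_spdk_ns amounts to
  ip -4 addr flush cvl_0_1                                # clear the initiator-side address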
00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 4058830 ']' 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 4058830 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 4058830 ']' 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 4058830 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4058830 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4058830' 00:34:51.367 killing process with pid 4058830 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 4058830 00:34:51.367 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 4058830 00:34:51.628 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:51.628 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:51.628 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:51.628 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:51.628 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:51.628 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:51.628 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:51.628 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:51.628 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:51.628 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.628 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.628 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.183 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:54.183 00:34:54.183 real 0m12.458s 00:34:54.183 user 
0m10.154s 00:34:54.183 sys 0m6.499s 00:34:54.183 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:54.183 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:54.183 ************************************ 00:34:54.183 END TEST nvmf_bdevio 00:34:54.183 ************************************ 00:34:54.183 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:54.183 00:34:54.183 real 5m3.531s 00:34:54.183 user 10m17.282s 00:34:54.183 sys 2m7.715s 00:34:54.183 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:54.183 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:54.183 ************************************ 00:34:54.183 END TEST nvmf_target_core_interrupt_mode 00:34:54.183 ************************************ 00:34:54.183 15:47:11 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:54.183 15:47:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:54.183 15:47:11 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:54.183 15:47:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.183 ************************************ 00:34:54.183 START TEST nvmf_interrupt 00:34:54.183 ************************************ 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:54.183 * Looking for test storage... 
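Each suite is driven through the run_test wrapper, which prints the START/END banners and the real/user/sys timings shown above. The trace that follows first locates the test storage directory, then probes the installed lcov version through cmp_versions in scripts/common.sh to decide which coverage flags apply. Below is a simplified sketch of that dotted-version comparison; it supports only the '<' operator used here, while the real helper also handles the other comparison operators and mixed-length version strings.

    # Simplified dotted-version compare in the spirit of cmp_versions (see caveats above).
    lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                          # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov 1.x detected'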
00:34:54.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:54.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.183 --rc genhtml_branch_coverage=1 00:34:54.183 --rc genhtml_function_coverage=1 00:34:54.183 --rc genhtml_legend=1 00:34:54.183 --rc geninfo_all_blocks=1 00:34:54.183 --rc geninfo_unexecuted_blocks=1 00:34:54.183 00:34:54.183 ' 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:54.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.183 --rc genhtml_branch_coverage=1 00:34:54.183 --rc genhtml_function_coverage=1 00:34:54.183 --rc genhtml_legend=1 00:34:54.183 --rc geninfo_all_blocks=1 00:34:54.183 --rc geninfo_unexecuted_blocks=1 00:34:54.183 00:34:54.183 ' 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:54.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.183 --rc genhtml_branch_coverage=1 00:34:54.183 --rc genhtml_function_coverage=1 00:34:54.183 --rc genhtml_legend=1 00:34:54.183 --rc geninfo_all_blocks=1 00:34:54.183 --rc geninfo_unexecuted_blocks=1 00:34:54.183 00:34:54.183 ' 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:54.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.183 --rc genhtml_branch_coverage=1 00:34:54.183 --rc genhtml_function_coverage=1 00:34:54.183 --rc genhtml_legend=1 00:34:54.183 --rc geninfo_all_blocks=1 00:34:54.183 --rc geninfo_unexecuted_blocks=1 00:34:54.183 00:34:54.183 ' 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:54.183 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:54.184 15:47:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:02.326 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.326 15:47:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:02.326 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:02.326 Found net devices under 0000:31:00.0: cvl_0_0 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:02.326 Found net devices under 0000:31:00.1: cvl_0_1 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:02.326 15:47:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:02.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:02.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:35:02.326 00:35:02.326 --- 10.0.0.2 ping statistics --- 00:35:02.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.326 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:02.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:02.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:35:02.326 00:35:02.326 --- 10.0.0.1 ping statistics --- 00:35:02.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.326 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:02.326 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=4063415 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 4063415 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 4063415 ']' 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:02.327 15:47:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:02.327 [2024-11-06 15:47:19.534986] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:02.327 [2024-11-06 15:47:19.535984] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:35:02.327 [2024-11-06 15:47:19.536022] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.327 [2024-11-06 15:47:19.630127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:02.327 [2024-11-06 15:47:19.665993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:02.327 [2024-11-06 15:47:19.666023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.327 [2024-11-06 15:47:19.666032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:02.327 [2024-11-06 15:47:19.666043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:02.327 [2024-11-06 15:47:19.666049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:02.327 [2024-11-06 15:47:19.667215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.327 [2024-11-06 15:47:19.667216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.327 [2024-11-06 15:47:19.723433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:02.327 [2024-11-06 15:47:19.723997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:02.327 [2024-11-06 15:47:19.724324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:02.589 5000+0 records in 00:35:02.589 5000+0 records out 00:35:02.589 10240000 bytes (10 MB, 9.8 MiB) copied, 0.019281 s, 531 MB/s 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:02.589 AIO0 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:02.589 [2024-11-06 15:47:20.436162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.589 15:47:20 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:02.589 [2024-11-06 15:47:20.480719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4063415 0 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4063415 0 idle 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4063415 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4063415 -w 256 00:35:02.589 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4063415 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.28 reactor_0' 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4063415 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.28 reactor_0 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4063415 1 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4063415 1 idle 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4063415 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4063415 -w 256 00:35:02.851 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4063437 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4063437 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=4063779 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4063415 0 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4063415 0 busy 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4063415 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4063415 -w 256 00:35:03.112 15:47:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:03.112 15:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4063415 root 20 0 128.2g 44928 32256 S 6.2 0.0 0:00.29 reactor_0' 00:35:03.112 15:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4063415 root 20 0 128.2g 44928 32256 S 6.2 0.0 0:00.29 reactor_0 00:35:03.112 15:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:03.112 15:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:03.112 15:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:35:03.112 15:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:03.112 15:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:03.112 15:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:03.112 15:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4063415 -w 256 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4063415 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.60 reactor_0' 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4063415 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.60 reactor_0 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4063415 1 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4063415 1 busy 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4063415 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4063415 -w 256 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4063437 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:01.34 reactor_1' 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4063437 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:01.34 reactor_1 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:04.498 15:47:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 4063779 00:35:14.495 Initializing NVMe Controllers 00:35:14.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:14.495 Controller IO queue size 256, less than required. 00:35:14.495 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:14.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:14.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:14.495 Initialization complete. Launching workers. 
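While spdk_nvme_perf drives queue-depth-256 random I/O at the target for ten seconds, the reactor_is_busy_or_idle checks interleaved with the run sample each reactor thread's CPU usage and compare it against the 30 percent BUSY_THRESHOLD: the test expects both reactors to go busy while I/O is in flight and to drop back to 0.0 once the run ends, which is what the R 99.9 and S 0.0 rows show. A minimal sketch of that sampling step follows; it assumes the thread is named reactor_<idx> and the default top column layout where %CPU is field 9.

    # Minimal sketch of the per-reactor CPU sampling traced around this run (assumptions above).
    reactor_cpu_rate() {
        local pid=$1 idx=$2
        top -bHn 1 -p "$pid" -w 256 |     # one batch-mode snapshot with per-thread rows
            grep "reactor_$idx" |
            sed -e 's/^\s*//g' |          # strip leading whitespace top may emit
            awk '{print $9}'              # %CPU column
    }
    rate=$(reactor_cpu_rate 4063415 0)    # e.g. 99.9 mid-run, 0.0 when idle
    (( ${rate%.*} >= 30 )) && echo busy || echo idle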
00:35:14.495 ======================================================== 00:35:14.495 Latency(us) 00:35:14.495 Device Information : IOPS MiB/s Average min max 00:35:14.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19156.45 74.83 13368.43 3755.18 21577.21 00:35:14.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19869.35 77.61 12885.93 7804.80 28507.51 00:35:14.495 ======================================================== 00:35:14.495 Total : 39025.80 152.44 13122.77 3755.18 28507.51 00:35:14.495 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4063415 0 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4063415 0 idle 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4063415 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4063415 -w 256 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4063415 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.27 reactor_0' 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4063415 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.27 reactor_0 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:14.495 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4063415 1 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4063415 1 idle 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4063415 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4063415 -w 256 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4063437 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4063437 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:14.496 15:47:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:14.496 15:47:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:14.496 15:47:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:35:14.496 15:47:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:35:14.496 15:47:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:35:14.496 15:47:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4063415 0 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4063415 0 idle 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4063415 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4063415 -w 256 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4063415 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.64 reactor_0' 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4063415 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.64 reactor_0 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:16.409 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4063415 1 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4063415 1 idle 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4063415 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4063415 -w 256 00:35:16.410 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:16.670 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4063437 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:35:16.670 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4063437 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:35:16.670 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:16.670 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:16.670 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:16.670 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:16.670 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:16.670 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:16.670 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:16.670 15:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:16.670 15:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:16.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:16.931 rmmod nvme_tcp 00:35:16.931 rmmod nvme_fabrics 00:35:16.931 rmmod nvme_keyring 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
4063415 ']' 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 4063415 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 4063415 ']' 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 4063415 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:16.931 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4063415 00:35:17.192 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:17.192 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:17.192 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4063415' 00:35:17.192 killing process with pid 4063415 00:35:17.192 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 4063415 00:35:17.192 15:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 4063415 00:35:17.192 15:47:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:17.192 15:47:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:17.192 15:47:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:17.192 15:47:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:17.192 15:47:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:17.192 15:47:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:17.192 15:47:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:17.192 15:47:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:17.192 15:47:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:17.192 15:47:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.192 15:47:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:17.192 15:47:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.739 15:47:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:19.739 00:35:19.739 real 0m25.488s 00:35:19.739 user 0m39.984s 00:35:19.739 sys 0m9.971s 00:35:19.739 15:47:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:19.739 15:47:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:19.739 ************************************ 00:35:19.739 END TEST nvmf_interrupt 00:35:19.739 ************************************ 00:35:19.739 00:35:19.739 real 30m21.954s 00:35:19.739 user 61m38.390s 00:35:19.739 sys 10m24.931s 00:35:19.739 15:47:37 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:19.739 15:47:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:19.739 ************************************ 00:35:19.739 END TEST nvmf_tcp 00:35:19.739 ************************************ 00:35:19.739 15:47:37 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:35:19.739 15:47:37 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:19.739 15:47:37 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:19.739 15:47:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:19.739 15:47:37 -- common/autotest_common.sh@10 -- # set +x 00:35:19.739 ************************************ 00:35:19.739 START TEST spdkcli_nvmf_tcp 00:35:19.739 ************************************ 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:19.739 * Looking for test storage... 00:35:19.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.739 --rc genhtml_branch_coverage=1 00:35:19.739 --rc genhtml_function_coverage=1 00:35:19.739 --rc genhtml_legend=1 00:35:19.739 --rc geninfo_all_blocks=1 00:35:19.739 --rc geninfo_unexecuted_blocks=1 00:35:19.739 00:35:19.739 ' 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.739 --rc genhtml_branch_coverage=1 00:35:19.739 --rc genhtml_function_coverage=1 00:35:19.739 --rc genhtml_legend=1 00:35:19.739 --rc geninfo_all_blocks=1 00:35:19.739 --rc geninfo_unexecuted_blocks=1 00:35:19.739 00:35:19.739 ' 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.739 --rc genhtml_branch_coverage=1 00:35:19.739 --rc genhtml_function_coverage=1 00:35:19.739 --rc genhtml_legend=1 00:35:19.739 --rc geninfo_all_blocks=1 00:35:19.739 --rc geninfo_unexecuted_blocks=1 00:35:19.739 00:35:19.739 ' 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.739 --rc genhtml_branch_coverage=1 00:35:19.739 --rc genhtml_function_coverage=1 00:35:19.739 --rc genhtml_legend=1 00:35:19.739 --rc geninfo_all_blocks=1 00:35:19.739 --rc geninfo_unexecuted_blocks=1 00:35:19.739 00:35:19.739 ' 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:19.739 
15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.739 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:19.740 15:47:37 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4066963 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4066963 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 4066963 ']' 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:19.740 15:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:19.740 [2024-11-06 15:47:37.578684] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
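The tail of the trace above shows `spdkcli/common.sh@32` launching `nvmf_tgt -m 0x3 -p 0` and then `waitforlisten` blocking until the app answers on `/var/tmp/spdk.sock` (the DPDK EAL parameter dump for that startup follows below). A hedged sketch of such a wait loop: the binary path, core mask, and socket come from the trace, while the polling strategy and the use of `rpc_get_methods` as a liveness probe are assumptions, not the helper's actual internals:

```bash
# Start the target, then poll the RPC Unix socket until it accepts a request.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_sock=/var/tmp/spdk.sock

"$rootdir/build/bin/nvmf_tgt" -m 0x3 -p 0 &
tgt_pid=$!

for _ in $(seq 1 100); do
    # rpc.py exits non-zero until the app is up and listening on the socket.
    "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null && break
    # Bail out early if the target already died.
    kill -0 "$tgt_pid" 2>/dev/null || exit 1
    sleep 0.5
done
```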
00:35:19.740 [2024-11-06 15:47:37.578764] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4066963 ] 00:35:19.740 [2024-11-06 15:47:37.672186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:20.002 [2024-11-06 15:47:37.726381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.002 [2024-11-06 15:47:37.726385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.575 15:47:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:20.575 15:47:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:35:20.575 15:47:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:20.575 15:47:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:20.575 15:47:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:20.575 15:47:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:20.575 15:47:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:20.575 15:47:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:20.575 15:47:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:20.575 15:47:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:20.575 15:47:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:20.575 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:20.575 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:20.575 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:20.575 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:20.575 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:20.575 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:20.575 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:20.575 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:20.575 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:20.575 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:20.575 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:20.575 ' 00:35:23.970 [2024-11-06 15:47:41.216765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.911 [2024-11-06 15:47:42.577103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:27.453 [2024-11-06 15:47:45.108139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:29.363 [2024-11-06 15:47:47.330506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:31.274 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:31.274 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:31.274 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:31.274 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:31.274 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:31.274 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:31.274 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:31.274 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:31.274 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:31.274 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:31.274 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:31.274 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:31.274 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:31.274 15:47:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:31.274 15:47:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:31.274 15:47:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.274 15:47:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:31.274 15:47:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:31.274 15:47:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.274 15:47:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:31.274 15:47:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:31.844 15:47:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:31.844 15:47:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:31.844 15:47:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:31.844 15:47:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:31.844 15:47:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.844 
15:47:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:31.844 15:47:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:31.844 15:47:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.844 15:47:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:31.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:31.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:31.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:31.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:31.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:31.844 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:31.844 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:31.844 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:31.844 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:31.844 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:31.844 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:31.844 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:31.844 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:31.844 ' 00:35:38.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:38.427 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:38.427 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:38.427 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:38.427 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:38.427 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:38.427 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:38.427 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:38.427 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:38.427 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:38.427 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:38.427 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:38.427 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:38.427 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:38.427 
15:47:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4066963 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 4066963 ']' 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 4066963 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4066963 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4066963' 00:35:38.427 killing process with pid 4066963 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 4066963 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 4066963 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4066963 ']' 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4066963 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 4066963 ']' 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 4066963 00:35:38.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (4066963) - No such process 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 4066963 is not found' 00:35:38.427 Process with pid 4066963 is not found 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:38.427 00:35:38.427 real 0m18.236s 00:35:38.427 user 0m40.450s 00:35:38.427 sys 0m0.950s 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:38.427 15:47:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:38.427 ************************************ 00:35:38.427 END TEST spdkcli_nvmf_tcp 00:35:38.427 ************************************ 00:35:38.427 15:47:55 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:38.427 15:47:55 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:38.427 15:47:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:38.427 15:47:55 -- common/autotest_common.sh@10 -- # set +x 00:35:38.427 ************************************ 00:35:38.427 START TEST nvmf_identify_passthru 00:35:38.427 ************************************ 00:35:38.427 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:38.427 * Looking for test 
storage... 00:35:38.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:38.427 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:38.427 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:38.427 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:38.427 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:38.427 15:47:55 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:38.427 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:38.427 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:38.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.427 --rc genhtml_branch_coverage=1 00:35:38.427 --rc genhtml_function_coverage=1 00:35:38.427 --rc genhtml_legend=1 00:35:38.427 --rc geninfo_all_blocks=1 00:35:38.427 --rc geninfo_unexecuted_blocks=1 00:35:38.427 00:35:38.427 ' 00:35:38.427 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:38.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.427 --rc genhtml_branch_coverage=1 00:35:38.427 --rc genhtml_function_coverage=1 00:35:38.427 --rc genhtml_legend=1 00:35:38.427 --rc geninfo_all_blocks=1 00:35:38.427 --rc geninfo_unexecuted_blocks=1 00:35:38.427 00:35:38.427 ' 00:35:38.427 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:38.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.427 --rc genhtml_branch_coverage=1 00:35:38.427 --rc genhtml_function_coverage=1 00:35:38.427 --rc genhtml_legend=1 00:35:38.427 --rc geninfo_all_blocks=1 00:35:38.427 --rc geninfo_unexecuted_blocks=1 00:35:38.427 00:35:38.427 ' 00:35:38.427 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:38.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.427 --rc genhtml_branch_coverage=1 00:35:38.427 --rc genhtml_function_coverage=1 00:35:38.427 --rc genhtml_legend=1 00:35:38.427 --rc geninfo_all_blocks=1 00:35:38.427 --rc geninfo_unexecuted_blocks=1 00:35:38.427 00:35:38.427 ' 00:35:38.427 15:47:55 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:38.427 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.428 15:47:55 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.428 15:47:55 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.428 15:47:55 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.428 15:47:55 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.428 15:47:55 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.428 15:47:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.428 15:47:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.428 15:47:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:38.428 15:47:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:38.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.428 15:47:55 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.428 15:47:55 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.428 15:47:55 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.428 15:47:55 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.428 15:47:55 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.428 15:47:55 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.428 15:47:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.428 15:47:55 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.428 15:47:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:38.428 15:47:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.428 15:47:55 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.428 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:38.428 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:38.428 15:47:55 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:38.428 15:47:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.116 15:48:03 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:45.116 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:45.116 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:45.116 Found net devices under 0000:31:00.0: cvl_0_0 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:45.116 Found net devices under 0000:31:00.1: cvl_0_1 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.116 15:48:03 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.116 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.378 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.378 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.378 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.378 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.378 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.378 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:35:45.640 00:35:45.640 --- 10.0.0.2 ping statistics --- 00:35:45.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.640 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
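The topology being built here splits the two-port E810 into a point-to-point NVMe/TCP link: cvl_0_0 becomes the target interface (10.0.0.2) inside a private network namespace, cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and TCP port 4420 is opened through the firewall. A minimal standalone sketch of the same setup, assuming the interface and namespace names discovered in this run:

    ip netns add cvl_0_0_ns_spdk                          # namespace that will own the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                    # verify initiator -> target reachability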
00:35:45.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:35:45.640 00:35:45.640 --- 10.0.0.1 ping statistics --- 00:35:45.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.640 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.640 15:48:03 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.640 15:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.640 15:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:45.640 15:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:45.640 15:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:45.640 15:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:45.640 15:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:45.640 15:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:45.640 15:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:46.212 15:48:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605500 00:35:46.212 15:48:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:46.212 15:48:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:46.212 15:48:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:46.785 15:48:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:46.785 15:48:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:46.785 15:48:04 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:46.785 15:48:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.785 15:48:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:46.785 15:48:04 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:46.785 15:48:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.785 15:48:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4074517 00:35:46.785 15:48:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:46.785 15:48:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:46.785 15:48:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4074517 00:35:46.785 15:48:04 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 4074517 ']' 00:35:46.785 15:48:04 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.785 15:48:04 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:46.785 15:48:04 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.785 15:48:04 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:46.785 15:48:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.785 [2024-11-06 15:48:04.688076] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:35:46.785 [2024-11-06 15:48:04.688150] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:47.047 [2024-11-06 15:48:04.791572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:47.047 [2024-11-06 15:48:04.844869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:47.047 [2024-11-06 15:48:04.844933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:47.047 [2024-11-06 15:48:04.844942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:47.047 [2024-11-06 15:48:04.844948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:47.047 [2024-11-06 15:48:04.844955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:47.047 [2024-11-06 15:48:04.847022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.047 [2024-11-06 15:48:04.847182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:47.047 [2024-11-06 15:48:04.847344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:47.047 [2024-11-06 15:48:04.847344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.618 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:47.618 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:35:47.618 15:48:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:47.618 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.618 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:47.618 INFO: Log level set to 20 00:35:47.618 INFO: Requests: 00:35:47.618 { 00:35:47.618 "jsonrpc": "2.0", 00:35:47.618 "method": "nvmf_set_config", 00:35:47.618 "id": 1, 00:35:47.618 "params": { 00:35:47.618 "admin_cmd_passthru": { 00:35:47.618 "identify_ctrlr": true 00:35:47.618 } 00:35:47.618 } 00:35:47.618 } 00:35:47.618 00:35:47.618 INFO: response: 00:35:47.618 { 00:35:47.618 "jsonrpc": "2.0", 00:35:47.618 "id": 1, 00:35:47.618 "result": true 00:35:47.618 } 00:35:47.618 00:35:47.618 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.618 15:48:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:47.618 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.618 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:47.618 INFO: Setting log level to 20 00:35:47.618 INFO: Setting log level to 20 00:35:47.618 INFO: Log level set to 20 00:35:47.618 INFO: Log level set to 20 00:35:47.618 INFO: Requests: 00:35:47.618 { 00:35:47.618 "jsonrpc": "2.0", 00:35:47.618 "method": "framework_start_init", 00:35:47.618 "id": 1 00:35:47.618 } 00:35:47.618 00:35:47.618 INFO: Requests: 00:35:47.618 { 00:35:47.618 "jsonrpc": "2.0", 00:35:47.618 "method": "framework_start_init", 00:35:47.618 "id": 1 00:35:47.618 } 00:35:47.618 00:35:47.879 [2024-11-06 15:48:05.615667] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:47.879 INFO: response: 00:35:47.879 { 00:35:47.879 "jsonrpc": "2.0", 00:35:47.879 "id": 1, 00:35:47.879 "result": true 00:35:47.879 } 00:35:47.879 00:35:47.879 INFO: response: 00:35:47.879 { 00:35:47.879 "jsonrpc": "2.0", 00:35:47.879 "id": 1, 00:35:47.879 "result": true 00:35:47.879 } 00:35:47.879 00:35:47.879 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.879 15:48:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:47.879 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.879 15:48:05 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:47.879 INFO: Setting log level to 40 00:35:47.879 INFO: Setting log level to 40 00:35:47.879 INFO: Setting log level to 40 00:35:47.879 [2024-11-06 15:48:05.629263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.879 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.879 15:48:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:47.879 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:47.879 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:47.879 15:48:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:47.879 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.879 15:48:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:48.140 Nvme0n1 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.140 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.140 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.140 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:48.140 [2024-11-06 15:48:06.035678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.140 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:48.140 [ 00:35:48.140 { 00:35:48.140 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:48.140 "subtype": "Discovery", 00:35:48.140 "listen_addresses": [], 00:35:48.140 "allow_any_host": true, 00:35:48.140 "hosts": [] 00:35:48.140 }, 00:35:48.140 { 00:35:48.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:48.140 "subtype": "NVMe", 00:35:48.140 "listen_addresses": [ 00:35:48.140 { 00:35:48.140 "trtype": "TCP", 00:35:48.140 "adrfam": "IPv4", 00:35:48.140 "traddr": "10.0.0.2", 00:35:48.140 "trsvcid": "4420" 00:35:48.140 } 00:35:48.140 ], 00:35:48.140 "allow_any_host": true, 00:35:48.140 "hosts": [], 00:35:48.140 "serial_number": 
"SPDK00000000000001", 00:35:48.140 "model_number": "SPDK bdev Controller", 00:35:48.140 "max_namespaces": 1, 00:35:48.140 "min_cntlid": 1, 00:35:48.140 "max_cntlid": 65519, 00:35:48.140 "namespaces": [ 00:35:48.140 { 00:35:48.140 "nsid": 1, 00:35:48.140 "bdev_name": "Nvme0n1", 00:35:48.140 "name": "Nvme0n1", 00:35:48.140 "nguid": "36344730526055000025384500000031", 00:35:48.140 "uuid": "36344730-5260-5500-0025-384500000031" 00:35:48.140 } 00:35:48.140 ] 00:35:48.140 } 00:35:48.140 ] 00:35:48.140 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.140 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:48.140 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:48.140 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:48.400 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605500 00:35:48.400 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:48.400 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:48.400 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:48.661 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:48.661 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605500 '!=' S64GNE0R605500 ']' 00:35:48.661 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:48.661 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.661 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:48.661 15:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:48.661 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:48.661 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:48.661 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:48.661 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:48.661 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:48.661 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:48.661 rmmod nvme_tcp 00:35:48.661 rmmod nvme_fabrics 00:35:48.661 rmmod nvme_keyring 00:35:48.661 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:48.661 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:48.661 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:48.661 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
4074517 ']' 00:35:48.661 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 4074517 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 4074517 ']' 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 4074517 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4074517 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4074517' 00:35:48.661 killing process with pid 4074517 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 4074517 00:35:48.661 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 4074517 00:35:49.232 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:49.232 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:49.232 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:49.232 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:49.232 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:49.232 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:49.232 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:49.232 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:49.232 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:49.232 15:48:06 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.232 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:49.232 15:48:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.146 15:48:08 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:51.146 00:35:51.146 real 0m13.408s 00:35:51.146 user 0m10.610s 00:35:51.146 sys 0m6.791s 00:35:51.146 15:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:51.146 15:48:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:51.146 ************************************ 00:35:51.146 END TEST nvmf_identify_passthru 00:35:51.146 ************************************ 00:35:51.146 15:48:09 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:51.146 15:48:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:51.146 15:48:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:51.146 15:48:09 -- common/autotest_common.sh@10 -- # set +x 00:35:51.146 ************************************ 00:35:51.146 START TEST nvmf_dif 00:35:51.146 ************************************ 00:35:51.146 15:48:09 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:51.408 * Looking for test storage... 
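That closes the passthru test: an nvmf_tgt was started with --wait-for-rpc inside the target namespace, identify passthrough was enabled before framework init, the local PCIe controller was exported over TCP, and the serial/model numbers read across the fabric (S64GNE0R605500 / SAMSUNG) were required to match the ones read directly over PCIe. The rpc_cmd sequence above can be reproduced by hand; a sketch, assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock and using the values from this run:

    rpc=./scripts/rpc.py                                   # hypothetical path, adjust to your tree
    $rpc nvmf_set_config --passthru-identify-ctrlr         # must run before framework init
    $rpc framework_start_init                              # releases the --wait-for-rpc hold
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420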
00:35:51.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:51.408 15:48:09 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:51.408 15:48:09 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:35:51.408 15:48:09 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:51.408 15:48:09 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:51.408 15:48:09 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:51.409 15:48:09 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:51.409 15:48:09 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:51.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.409 --rc genhtml_branch_coverage=1 00:35:51.409 --rc genhtml_function_coverage=1 00:35:51.409 --rc genhtml_legend=1 00:35:51.409 --rc geninfo_all_blocks=1 00:35:51.409 --rc geninfo_unexecuted_blocks=1 00:35:51.409 00:35:51.409 ' 00:35:51.409 15:48:09 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:51.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.409 --rc genhtml_branch_coverage=1 00:35:51.409 --rc genhtml_function_coverage=1 00:35:51.409 --rc genhtml_legend=1 00:35:51.409 --rc geninfo_all_blocks=1 00:35:51.409 --rc geninfo_unexecuted_blocks=1 00:35:51.409 00:35:51.409 ' 00:35:51.409 15:48:09 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:35:51.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.409 --rc genhtml_branch_coverage=1 00:35:51.409 --rc genhtml_function_coverage=1 00:35:51.409 --rc genhtml_legend=1 00:35:51.409 --rc geninfo_all_blocks=1 00:35:51.409 --rc geninfo_unexecuted_blocks=1 00:35:51.409 00:35:51.409 ' 00:35:51.409 15:48:09 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:51.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.409 --rc genhtml_branch_coverage=1 00:35:51.409 --rc genhtml_function_coverage=1 00:35:51.409 --rc genhtml_legend=1 00:35:51.409 --rc geninfo_all_blocks=1 00:35:51.409 --rc geninfo_unexecuted_blocks=1 00:35:51.409 00:35:51.409 ' 00:35:51.409 15:48:09 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:51.409 15:48:09 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:51.409 15:48:09 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.409 15:48:09 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.409 15:48:09 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.409 15:48:09 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:51.409 15:48:09 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:51.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:51.409 15:48:09 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:51.409 15:48:09 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:51.409 15:48:09 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:51.409 15:48:09 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:51.409 15:48:09 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:51.409 15:48:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:51.409 15:48:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:51.409 15:48:09 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:51.409 15:48:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:59.560 15:48:16 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:59.560 15:48:16 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:59.560 15:48:16 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:59.560 15:48:16 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:59.560 15:48:16 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:59.560 15:48:16 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:59.560 15:48:16 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:59.560 15:48:16 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:59.560 15:48:16 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:59.561 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:59.561 
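nvmf/common.sh classifies candidate NICs purely by PCI vendor:device ID: 0x1592/0x159b go into the e810 array, 0x37d2 into x722, and the listed Mellanox IDs into mlx; this run matched both ports of an E810 (8086:159b). The same lookup can be done directly from a shell (the BDF 0000:31:00.0 is the one found above):

    lspci -D -d 8086:159b                                 # list E810-class functions with full BDFs
    basename "$(readlink /sys/bus/pci/devices/0000:31:00.0/driver)"   # kernel driver, e.g. ice
    ls /sys/bus/pci/devices/0000:31:00.0/net              # netdev bound to the port, e.g. cvl_0_0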
15:48:16 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:59.561 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:59.561 Found net devices under 0000:31:00.0: cvl_0_0 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:59.561 Found net devices under 0000:31:00.1: cvl_0_1 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:59.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:59.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:35:59.561 00:35:59.561 --- 10.0.0.2 ping statistics --- 00:35:59.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.561 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:59.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
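Worth noting: the ipts wrapper tags every rule it inserts with an 'SPDK_NVMF:' comment, so teardown (the iptr step seen at the end of the previous test) can drop all harness rules in one sweep instead of tracking them individually. The idiom, as it appears in this log:

    # insert a rule carrying a recognizable tag
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # cleanup: rewrite the ruleset minus every tagged rule
    iptables-save | grep -v SPDK_NVMF | iptables-restore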
00:35:59.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:35:59.561 00:35:59.561 --- 10.0.0.1 ping statistics --- 00:35:59.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.561 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:59.561 15:48:16 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:02.110 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:02.110 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:02.110 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:02.683 15:48:20 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:02.683 15:48:20 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:02.683 15:48:20 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:02.683 15:48:20 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:02.683 15:48:20 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:02.683 15:48:20 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:02.683 15:48:20 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:02.683 15:48:20 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:02.683 15:48:20 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:02.683 15:48:20 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:02.683 15:48:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:02.683 15:48:20 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=4081195 00:36:02.683 15:48:20 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 4081195 00:36:02.683 15:48:20 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:02.683 15:48:20 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 4081195 ']' 00:36:02.683 15:48:20 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.683 15:48:20 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:02.683 15:48:20 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:02.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.683 15:48:20 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:02.683 15:48:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:02.683 [2024-11-06 15:48:20.501278] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:36:02.683 [2024-11-06 15:48:20.501337] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:02.683 [2024-11-06 15:48:20.601459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.683 [2024-11-06 15:48:20.652113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:02.683 [2024-11-06 15:48:20.652161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:02.683 [2024-11-06 15:48:20.652176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:02.683 [2024-11-06 15:48:20.652183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:02.683 [2024-11-06 15:48:20.652189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:02.683 [2024-11-06 15:48:20.652968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.626 15:48:21 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:03.626 15:48:21 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:36:03.626 15:48:21 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:03.626 15:48:21 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:03.626 15:48:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.626 15:48:21 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:03.626 15:48:21 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:03.626 15:48:21 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:03.626 15:48:21 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.626 15:48:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.626 [2024-11-06 15:48:21.379990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:03.626 15:48:21 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.626 15:48:21 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:03.626 15:48:21 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:03.626 15:48:21 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:03.626 15:48:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.626 ************************************ 00:36:03.626 START TEST fio_dif_1_default 00:36:03.626 ************************************ 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.626 bdev_null0 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:03.626 [2024-11-06 15:48:21.464337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:03.626 { 00:36:03.626 "params": { 00:36:03.626 "name": "Nvme$subsystem", 00:36:03.626 "trtype": "$TEST_TRANSPORT", 00:36:03.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.626 "adrfam": "ipv4", 00:36:03.626 "trsvcid": "$NVMF_PORT", 00:36:03.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.626 "hdgst": ${hdgst:-false}, 00:36:03.626 
"ddgst": ${ddgst:-false} 00:36:03.626 }, 00:36:03.626 "method": "bdev_nvme_attach_controller" 00:36:03.626 } 00:36:03.626 EOF 00:36:03.626 )") 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:03.626 "params": { 00:36:03.626 "name": "Nvme0", 00:36:03.626 "trtype": "tcp", 00:36:03.626 "traddr": "10.0.0.2", 00:36:03.626 "adrfam": "ipv4", 00:36:03.626 "trsvcid": "4420", 00:36:03.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:03.626 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:03.626 "hdgst": false, 00:36:03.626 "ddgst": false 00:36:03.626 }, 00:36:03.626 "method": "bdev_nvme_attach_controller" 00:36:03.626 }' 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:03.626 15:48:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.197 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:04.197 fio-3.35 00:36:04.197 Starting 1 thread 00:36:16.433 00:36:16.433 filename0: (groupid=0, jobs=1): err= 0: pid=4081737: Wed Nov 6 15:48:32 2024 00:36:16.433 read: IOPS=190, BW=762KiB/s (781kB/s)(7648KiB/10032msec) 00:36:16.433 slat (nsec): min=5490, max=58770, avg=6375.96, stdev=1960.04 00:36:16.433 clat (usec): min=484, max=43019, avg=20969.10, stdev=20227.76 00:36:16.433 lat (usec): min=490, max=43027, avg=20975.48, stdev=20227.74 00:36:16.433 clat percentiles (usec): 00:36:16.433 | 1.00th=[ 562], 5.00th=[ 701], 10.00th=[ 799], 20.00th=[ 824], 00:36:16.433 | 30.00th=[ 840], 40.00th=[ 906], 50.00th=[ 1037], 60.00th=[41157], 00:36:16.433 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:36:16.433 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:36:16.433 | 99.99th=[43254] 00:36:16.433 bw ( KiB/s): min= 704, max= 832, per=100.00%, avg=763.20, stdev=26.01, samples=20 00:36:16.433 iops : min= 176, max= 208, avg=190.80, stdev= 6.50, samples=20 00:36:16.433 lat (usec) : 500=0.26%, 750=5.70%, 1000=42.83% 00:36:16.433 lat (msec) : 2=1.41%, 50=49.79% 00:36:16.433 cpu : usr=93.32%, sys=6.45%, ctx=9, majf=0, minf=242 00:36:16.433 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.433 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.433 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:36:16.433 00:36:16.433 Run status group 0 (all jobs): 00:36:16.433 READ: bw=762KiB/s (781kB/s), 762KiB/s-762KiB/s (781kB/s-781kB/s), io=7648KiB (7832kB), run=10032-10032msec 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.433 00:36:16.433 real 0m11.329s 00:36:16.433 user 0m18.332s 00:36:16.433 sys 0m1.167s 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:16.433 ************************************ 00:36:16.433 END TEST fio_dif_1_default 00:36:16.433 ************************************ 00:36:16.433 15:48:32 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:16.433 15:48:32 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:16.433 15:48:32 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:16.433 15:48:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:16.433 ************************************ 00:36:16.433 START TEST fio_dif_1_multi_subsystems 00:36:16.433 ************************************ 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:16.433 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:16.434 bdev_null0 00:36:16.434 15:48:32 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:16.434 [2024-11-06 15:48:32.878407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:16.434 bdev_null1 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:16.434 { 00:36:16.434 "params": { 00:36:16.434 "name": "Nvme$subsystem", 00:36:16.434 "trtype": "$TEST_TRANSPORT", 00:36:16.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:16.434 "adrfam": "ipv4", 00:36:16.434 "trsvcid": "$NVMF_PORT", 00:36:16.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:16.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:16.434 "hdgst": ${hdgst:-false}, 00:36:16.434 "ddgst": ${ddgst:-false} 00:36:16.434 }, 00:36:16.434 "method": "bdev_nvme_attach_controller" 00:36:16.434 } 00:36:16.434 EOF 00:36:16.434 )") 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.434 
15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:16.434 { 00:36:16.434 "params": { 00:36:16.434 "name": "Nvme$subsystem", 00:36:16.434 "trtype": "$TEST_TRANSPORT", 00:36:16.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:16.434 "adrfam": "ipv4", 00:36:16.434 "trsvcid": "$NVMF_PORT", 00:36:16.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:16.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:16.434 "hdgst": ${hdgst:-false}, 00:36:16.434 "ddgst": ${ddgst:-false} 00:36:16.434 }, 00:36:16.434 "method": "bdev_nvme_attach_controller" 00:36:16.434 } 00:36:16.434 EOF 00:36:16.434 )") 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:16.434 "params": { 00:36:16.434 "name": "Nvme0", 00:36:16.434 "trtype": "tcp", 00:36:16.434 "traddr": "10.0.0.2", 00:36:16.434 "adrfam": "ipv4", 00:36:16.434 "trsvcid": "4420", 00:36:16.434 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:16.434 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:16.434 "hdgst": false, 00:36:16.434 "ddgst": false 00:36:16.434 }, 00:36:16.434 "method": "bdev_nvme_attach_controller" 00:36:16.434 },{ 00:36:16.434 "params": { 00:36:16.434 "name": "Nvme1", 00:36:16.434 "trtype": "tcp", 00:36:16.434 "traddr": "10.0.0.2", 00:36:16.434 "adrfam": "ipv4", 00:36:16.434 "trsvcid": "4420", 00:36:16.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:16.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:16.434 "hdgst": false, 00:36:16.434 "ddgst": false 00:36:16.434 }, 00:36:16.434 "method": "bdev_nvme_attach_controller" 00:36:16.434 }' 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:16.434 15:48:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:16.434 15:48:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:36:16.434 15:48:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:16.434 15:48:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:16.434 15:48:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.434 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:16.434 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:16.434 fio-3.35 00:36:16.434 Starting 2 threads 00:36:26.427 00:36:26.427 filename0: (groupid=0, jobs=1): err= 0: pid=4083930: Wed Nov 6 15:48:44 2024 00:36:26.428 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10002msec) 00:36:26.428 slat (nsec): min=5480, max=33248, avg=6338.30, stdev=1710.54 00:36:26.428 clat (usec): min=403, max=42282, avg=21082.54, stdev=20157.70 00:36:26.428 lat (usec): min=409, max=42316, avg=21088.88, stdev=20157.66 00:36:26.428 clat percentiles (usec): 00:36:26.428 | 1.00th=[ 611], 5.00th=[ 799], 10.00th=[ 824], 20.00th=[ 840], 00:36:26.428 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[40633], 60.00th=[41157], 00:36:26.428 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:26.428 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:36:26.428 | 99.99th=[42206] 00:36:26.428 bw ( KiB/s): min= 672, max= 768, per=66.12%, avg=759.58, stdev=23.47, samples=19 00:36:26.428 iops : min= 168, max= 192, avg=189.89, stdev= 5.87, samples=19 00:36:26.428 lat (usec) : 500=0.21%, 750=2.11%, 1000=45.99% 00:36:26.428 lat (msec) : 2=1.48%, 50=50.21% 00:36:26.428 cpu : usr=95.15%, sys=4.64%, ctx=10, majf=0, minf=168 00:36:26.428 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:26.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.428 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.428 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:26.428 filename1: (groupid=0, jobs=1): err= 0: pid=4083931: Wed Nov 6 15:48:44 2024 00:36:26.428 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10021msec) 00:36:26.428 slat (nsec): min=5526, max=31994, avg=6583.95, stdev=2044.99 00:36:26.428 clat (usec): min=806, max=42062, avg=40883.97, stdev=2577.99 00:36:26.428 lat (usec): min=811, max=42068, avg=40890.55, stdev=2578.09 00:36:26.428 clat percentiles (usec): 00:36:26.428 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:26.428 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:26.428 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:36:26.428 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:26.428 | 99.99th=[42206] 00:36:26.428 bw ( KiB/s): min= 384, max= 416, per=33.97%, avg=390.40, stdev=13.13, samples=20 00:36:26.428 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:36:26.428 lat (usec) : 1000=0.41% 00:36:26.428 lat (msec) : 50=99.59% 00:36:26.428 cpu : usr=95.39%, sys=4.40%, ctx=14, majf=0, minf=43 00:36:26.428 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:26.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.428 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.428 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:26.428 00:36:26.428 Run status group 0 (all jobs): 00:36:26.428 READ: bw=1148KiB/s (1176kB/s), 391KiB/s-758KiB/s (401kB/s-776kB/s), io=11.2MiB (11.8MB), run=10002-10021msec 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.428 00:36:26.428 real 0m11.413s 00:36:26.428 user 0m35.200s 00:36:26.428 sys 0m1.264s 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:26.428 15:48:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:26.428 ************************************ 00:36:26.428 END TEST fio_dif_1_multi_subsystems 00:36:26.428 ************************************ 00:36:26.428 15:48:44 nvmf_dif -- target/dif.sh@143 -- 
# run_test fio_dif_rand_params fio_dif_rand_params 00:36:26.428 15:48:44 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:26.428 15:48:44 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:26.428 15:48:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:26.428 ************************************ 00:36:26.428 START TEST fio_dif_rand_params 00:36:26.428 ************************************ 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:26.428 bdev_null0 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:26.428 [2024-11-06 15:48:44.369240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
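Annotation: the subsystem setup traced above reduces to four SPDK RPCs. A minimal standalone sketch, assuming the stock scripts/rpc.py client and a target whose TCP transport was already created earlier in this run (that step is not repeated here):

# Null bdev: 64 MiB, 512-byte blocks, 16 bytes of metadata, DIF type 3
# (arguments mirror the bdev_null_create call traced above).
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# NVMe-oF subsystem with the null bdev attached as a namespace.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0

# TCP listener, matching the "Target Listening on 10.0.0.2 port 4420" notice.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

Teardown is the mirror image, as the destroy_subsystems traces elsewhere in this log show: nvmf_delete_subsystem followed by bdev_null_delete.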
00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:26.428 { 00:36:26.428 "params": { 00:36:26.428 "name": "Nvme$subsystem", 00:36:26.428 "trtype": "$TEST_TRANSPORT", 00:36:26.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:26.428 "adrfam": "ipv4", 00:36:26.428 "trsvcid": "$NVMF_PORT", 00:36:26.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:26.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:26.428 "hdgst": ${hdgst:-false}, 00:36:26.428 "ddgst": ${ddgst:-false} 00:36:26.428 }, 00:36:26.428 "method": "bdev_nvme_attach_controller" 00:36:26.428 } 00:36:26.428 EOF 00:36:26.428 )") 00:36:26.428 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # 
jq . 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:26.429 15:48:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:26.429 "params": { 00:36:26.429 "name": "Nvme0", 00:36:26.429 "trtype": "tcp", 00:36:26.429 "traddr": "10.0.0.2", 00:36:26.429 "adrfam": "ipv4", 00:36:26.429 "trsvcid": "4420", 00:36:26.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:26.429 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:26.429 "hdgst": false, 00:36:26.429 "ddgst": false 00:36:26.429 }, 00:36:26.429 "method": "bdev_nvme_attach_controller" 00:36:26.429 }' 00:36:26.689 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:26.689 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:26.689 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:26.689 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:26.689 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:26.689 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:26.689 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:26.689 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:26.689 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:26.689 15:48:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:26.949 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:26.949 ... 
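Annotation: the launch sequence printed above is the standard pattern for SPDK's fio bdev plugin: a JSON config that attaches the target as a local NVMe bdev, a fio job file, and the plugin LD_PRELOADed into fio. A rough standalone equivalent follows; the nvme.json/dif.fio file names and the Nvme0n1 bdev name are illustrative (the harness passes both files via /dev/fd process substitution, and the real job file comes from gen_fio_conf, which is not shown), and the "subsystems"/"config" wrapper is assumed, since the trace only prints the inner attach_controller object:

cat > nvme.json <<-EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Job parameters matching this NULL_DIF=3 pass: rw=randread, bs=128k,
# iodepth=3, 3 jobs, 5 s time-based runtime (cf. run=5002-5029msec below).
cat > dif.fio <<-EOF
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=nvme.json dif.fio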
00:36:26.949 fio-3.35 00:36:26.949 Starting 3 threads 00:36:33.530 00:36:33.530 filename0: (groupid=0, jobs=1): err= 0: pid=4086208: Wed Nov 6 15:48:50 2024 00:36:33.530 read: IOPS=236, BW=29.5MiB/s (30.9MB/s)(148MiB/5029msec) 00:36:33.530 slat (nsec): min=5476, max=37248, avg=7264.14, stdev=1847.15 00:36:33.530 clat (usec): min=4067, max=90557, avg=12699.84, stdev=14694.79 00:36:33.530 lat (usec): min=4075, max=90563, avg=12707.10, stdev=14694.92 00:36:33.530 clat percentiles (usec): 00:36:33.530 | 1.00th=[ 4883], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6652], 00:36:33.530 | 30.00th=[ 7046], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 8029], 00:36:33.530 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[45876], 95.00th=[48497], 00:36:33.530 | 99.00th=[51643], 99.50th=[87557], 99.90th=[89654], 99.95th=[90702], 00:36:33.530 | 99.99th=[90702] 00:36:33.530 bw ( KiB/s): min=11543, max=45659, per=27.57%, avg=30328.30, stdev=10013.53, samples=10 00:36:33.530 iops : min= 90, max= 356, avg=236.80, stdev=78.13, samples=10 00:36:33.530 lat (msec) : 10=86.10%, 20=1.94%, 50=9.77%, 100=2.19% 00:36:33.530 cpu : usr=94.01%, sys=5.73%, ctx=13, majf=0, minf=84 00:36:33.530 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:33.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.530 issued rwts: total=1187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.530 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:33.530 filename0: (groupid=0, jobs=1): err= 0: pid=4086209: Wed Nov 6 15:48:50 2024 00:36:33.530 read: IOPS=318, BW=39.8MiB/s (41.7MB/s)(199MiB/5009msec) 00:36:33.530 slat (nsec): min=5498, max=36952, avg=6690.68, stdev=1744.68 00:36:33.530 clat (usec): min=3410, max=51279, avg=9410.43, stdev=7283.72 00:36:33.530 lat (usec): min=3419, max=51285, avg=9417.13, stdev=7283.88 00:36:33.530 clat percentiles (usec): 00:36:33.530 | 1.00th=[ 4359], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6718], 00:36:33.530 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8225], 60.00th=[ 8717], 00:36:33.530 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[11076], 00:36:33.530 | 99.00th=[47973], 99.50th=[49021], 99.90th=[50594], 99.95th=[51119], 00:36:33.530 | 99.99th=[51119] 00:36:33.530 bw ( KiB/s): min=27648, max=46336, per=37.05%, avg=40755.20, stdev=5969.43, samples=10 00:36:33.530 iops : min= 216, max= 362, avg=318.40, stdev=46.64, samples=10 00:36:33.530 lat (msec) : 4=0.44%, 10=83.95%, 20=12.23%, 50=3.07%, 100=0.31% 00:36:33.530 cpu : usr=93.93%, sys=5.81%, ctx=17, majf=0, minf=98 00:36:33.530 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:33.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.530 issued rwts: total=1595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.530 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:33.530 filename0: (groupid=0, jobs=1): err= 0: pid=4086210: Wed Nov 6 15:48:50 2024 00:36:33.530 read: IOPS=307, BW=38.5MiB/s (40.4MB/s)(193MiB/5002msec) 00:36:33.530 slat (nsec): min=5550, max=81722, avg=7625.85, stdev=2579.98 00:36:33.530 clat (usec): min=3963, max=86768, avg=9733.84, stdev=7796.85 00:36:33.530 lat (usec): min=3969, max=86777, avg=9741.46, stdev=7797.03 00:36:33.530 clat percentiles (usec): 00:36:33.530 | 1.00th=[ 4490], 5.00th=[ 5342], 10.00th=[ 5932], 
20.00th=[ 6652], 00:36:33.530 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 8979], 00:36:33.530 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11338], 95.00th=[12125], 00:36:33.530 | 99.00th=[49021], 99.50th=[50070], 99.90th=[86508], 99.95th=[86508], 00:36:33.530 | 99.99th=[86508] 00:36:33.530 bw ( KiB/s): min=20736, max=50688, per=35.22%, avg=38741.33, stdev=8062.98, samples=9 00:36:33.530 iops : min= 162, max= 396, avg=302.67, stdev=62.99, samples=9 00:36:33.530 lat (msec) : 4=0.13%, 10=73.44%, 20=23.38%, 50=2.53%, 100=0.52% 00:36:33.530 cpu : usr=94.18%, sys=5.56%, ctx=9, majf=0, minf=187 00:36:33.530 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:33.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.530 issued rwts: total=1540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.530 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:33.530 00:36:33.530 Run status group 0 (all jobs): 00:36:33.530 READ: bw=107MiB/s (113MB/s), 29.5MiB/s-39.8MiB/s (30.9MB/s-41.7MB/s), io=540MiB (566MB), run=5002-5029msec 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:33.530 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.531 bdev_null0 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.531 [2024-11-06 15:48:50.580358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.531 bdev_null1 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.531 15:48:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.531 bdev_null2 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
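Annotation: one detail of the fio_plugin helper traced here: before launching fio it probes the plugin binary for linked sanitizer runtimes, because a sanitizer library must appear in LD_PRELOAD ahead of anything it instruments. That is what all the ldd/grep/awk lines in this log are doing. Condensed into a sketch (workspace path as in this run; the /dev/fd arguments stand in for the process-substituted config and job file the caller wires up):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

# Third ldd column is the resolved library path; empty when not linked.
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    if [[ -n "$lib" ]]; then asan_lib=$lib; break; fi
done

# In this run both probes come back empty, so the preload is the plugin alone.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /dev/fd/62 /dev/fd/61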
00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:33.531 { 00:36:33.531 "params": { 00:36:33.531 "name": "Nvme$subsystem", 00:36:33.531 "trtype": "$TEST_TRANSPORT", 00:36:33.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:33.531 "adrfam": "ipv4", 00:36:33.531 "trsvcid": "$NVMF_PORT", 00:36:33.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:33.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:33.531 "hdgst": ${hdgst:-false}, 00:36:33.531 "ddgst": ${ddgst:-false} 00:36:33.531 }, 00:36:33.531 "method": "bdev_nvme_attach_controller" 00:36:33.531 } 00:36:33.531 EOF 00:36:33.531 )") 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:33.531 { 00:36:33.531 "params": { 00:36:33.531 "name": "Nvme$subsystem", 00:36:33.531 "trtype": "$TEST_TRANSPORT", 00:36:33.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:33.531 "adrfam": "ipv4", 00:36:33.531 "trsvcid": "$NVMF_PORT", 00:36:33.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:33.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:33.531 "hdgst": ${hdgst:-false}, 00:36:33.531 "ddgst": ${ddgst:-false} 00:36:33.531 }, 00:36:33.531 "method": "bdev_nvme_attach_controller" 00:36:33.531 } 00:36:33.531 EOF 00:36:33.531 )") 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:33.531 15:48:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:33.532 { 00:36:33.532 "params": { 00:36:33.532 "name": "Nvme$subsystem", 00:36:33.532 "trtype": "$TEST_TRANSPORT", 00:36:33.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:33.532 "adrfam": "ipv4", 00:36:33.532 "trsvcid": "$NVMF_PORT", 00:36:33.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:33.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:33.532 "hdgst": ${hdgst:-false}, 00:36:33.532 "ddgst": ${ddgst:-false} 00:36:33.532 }, 00:36:33.532 "method": "bdev_nvme_attach_controller" 00:36:33.532 } 00:36:33.532 EOF 00:36:33.532 )") 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:33.532 "params": { 00:36:33.532 "name": "Nvme0", 00:36:33.532 "trtype": "tcp", 00:36:33.532 "traddr": "10.0.0.2", 00:36:33.532 "adrfam": "ipv4", 00:36:33.532 "trsvcid": "4420", 00:36:33.532 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.532 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:33.532 "hdgst": false, 00:36:33.532 "ddgst": false 00:36:33.532 }, 00:36:33.532 "method": "bdev_nvme_attach_controller" 00:36:33.532 },{ 00:36:33.532 "params": { 00:36:33.532 "name": "Nvme1", 00:36:33.532 "trtype": "tcp", 00:36:33.532 "traddr": "10.0.0.2", 00:36:33.532 "adrfam": "ipv4", 00:36:33.532 "trsvcid": "4420", 00:36:33.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:33.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:33.532 "hdgst": false, 00:36:33.532 "ddgst": false 00:36:33.532 }, 00:36:33.532 "method": "bdev_nvme_attach_controller" 00:36:33.532 },{ 00:36:33.532 "params": { 00:36:33.532 "name": "Nvme2", 00:36:33.532 "trtype": "tcp", 00:36:33.532 "traddr": "10.0.0.2", 00:36:33.532 "adrfam": "ipv4", 00:36:33.532 "trsvcid": "4420", 00:36:33.532 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:33.532 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:33.532 "hdgst": false, 00:36:33.532 "ddgst": false 00:36:33.532 }, 00:36:33.532 "method": "bdev_nvme_attach_controller" 00:36:33.532 }' 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # 
asan_lib= 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:33.532 15:48:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:33.532 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:33.532 ... 00:36:33.532 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:33.532 ... 00:36:33.532 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:33.532 ... 00:36:33.532 fio-3.35 00:36:33.532 Starting 24 threads 00:36:45.762 00:36:45.762 filename0: (groupid=0, jobs=1): err= 0: pid=4087629: Wed Nov 6 15:49:02 2024 00:36:45.762 read: IOPS=707, BW=2831KiB/s (2899kB/s)(27.7MiB/10019msec) 00:36:45.762 slat (nsec): min=5659, max=96872, avg=12585.10, stdev=11031.41 00:36:45.762 clat (usec): min=1861, max=43856, avg=22507.52, stdev=5074.89 00:36:45.762 lat (usec): min=1883, max=43870, avg=22520.11, stdev=5075.63 00:36:45.762 clat percentiles (usec): 00:36:45.762 | 1.00th=[ 2442], 5.00th=[13698], 10.00th=[16712], 20.00th=[19530], 00:36:45.762 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:45.762 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25560], 95.00th=[27657], 00:36:45.762 | 99.00th=[35914], 99.50th=[38011], 99.90th=[42206], 99.95th=[43779], 00:36:45.762 | 99.99th=[43779] 00:36:45.762 bw ( KiB/s): min= 2560, max= 4272, per=4.42%, avg=2834.10, stdev=363.15, samples=20 00:36:45.762 iops : min= 640, max= 1068, avg=708.50, stdev=90.80, samples=20 00:36:45.762 lat (msec) : 2=0.11%, 4=1.75%, 10=1.20%, 20=17.92%, 50=79.02% 00:36:45.762 cpu : usr=98.83%, sys=0.86%, ctx=53, majf=0, minf=74 00:36:45.762 IO depths : 1=1.4%, 2=2.9%, 4=11.4%, 8=72.6%, 16=11.6%, 32=0.0%, >=64=0.0% 00:36:45.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.762 complete : 0=0.0%, 4=89.9%, 8=5.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.762 issued rwts: total=7092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.762 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.762 filename0: (groupid=0, jobs=1): err= 0: pid=4087630: Wed Nov 6 15:49:02 2024 00:36:45.762 read: IOPS=682, BW=2728KiB/s (2794kB/s)(26.7MiB/10017msec) 00:36:45.762 slat (usec): min=5, max=1654, avg=10.19, stdev=20.89 00:36:45.762 clat (usec): min=1959, max=40300, avg=23376.32, stdev=3538.07 00:36:45.762 lat (usec): min=1993, max=40335, avg=23386.52, stdev=3536.57 00:36:45.762 clat percentiles (usec): 00:36:45.762 | 1.00th=[ 2409], 5.00th=[22676], 10.00th=[23200], 20.00th=[23725], 00:36:45.762 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:45.762 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24511], 95.00th=[24773], 00:36:45.762 | 99.00th=[25297], 99.50th=[25560], 99.90th=[38536], 99.95th=[39584], 00:36:45.762 | 99.99th=[40109] 00:36:45.763 bw ( KiB/s): min= 2560, max= 3968, per=4.25%, avg=2725.80, stdev=299.84, samples=20 00:36:45.763 iops : min= 640, max= 992, avg=681.40, stdev=74.97, samples=20 00:36:45.763 lat (msec) : 2=0.04%, 4=1.81%, 10=1.04%, 20=1.05%, 50=96.05% 00:36:45.763 cpu : usr=98.85%, sys=0.87%, ctx=13, majf=0, minf=91 
00:36:45.763 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:45.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 issued rwts: total=6832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.763 filename0: (groupid=0, jobs=1): err= 0: pid=4087631: Wed Nov 6 15:49:02 2024 00:36:45.763 read: IOPS=664, BW=2659KiB/s (2723kB/s)(26.0MiB/10004msec) 00:36:45.763 slat (usec): min=5, max=110, avg=30.48, stdev=15.90 00:36:45.763 clat (usec): min=9572, max=39087, avg=23786.06, stdev=1218.39 00:36:45.763 lat (usec): min=9601, max=39112, avg=23816.53, stdev=1218.25 00:36:45.763 clat percentiles (usec): 00:36:45.763 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:45.763 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:45.763 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:36:45.763 | 99.00th=[25297], 99.50th=[26084], 99.90th=[39060], 99.95th=[39060], 00:36:45.763 | 99.99th=[39060] 00:36:45.763 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2647.58, stdev=61.13, samples=19 00:36:45.763 iops : min= 640, max= 672, avg=661.89, stdev=15.28, samples=19 00:36:45.763 lat (msec) : 10=0.15%, 20=0.48%, 50=99.37% 00:36:45.763 cpu : usr=98.44%, sys=1.03%, ctx=147, majf=0, minf=65 00:36:45.763 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:45.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 issued rwts: total=6650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.763 filename0: (groupid=0, jobs=1): err= 0: pid=4087632: Wed Nov 6 15:49:02 2024 00:36:45.763 read: IOPS=655, BW=2621KiB/s (2683kB/s)(25.6MiB/10004msec) 00:36:45.763 slat (usec): min=5, max=110, avg=16.19, stdev=14.28 00:36:45.763 clat (usec): min=7537, max=47104, avg=24339.63, stdev=4070.66 00:36:45.763 lat (usec): min=7543, max=47111, avg=24355.82, stdev=4070.71 00:36:45.763 clat percentiles (usec): 00:36:45.763 | 1.00th=[12518], 5.00th=[18220], 10.00th=[21627], 20.00th=[23462], 00:36:45.763 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:36:45.763 | 70.00th=[24249], 80.00th=[24773], 90.00th=[27657], 95.00th=[31851], 00:36:45.763 | 99.00th=[40109], 99.50th=[42730], 99.90th=[44827], 99.95th=[46924], 00:36:45.763 | 99.99th=[46924] 00:36:45.763 bw ( KiB/s): min= 2436, max= 2720, per=4.07%, avg=2611.58, stdev=80.81, samples=19 00:36:45.763 iops : min= 609, max= 680, avg=652.89, stdev=20.20, samples=19 00:36:45.763 lat (msec) : 10=0.38%, 20=6.82%, 50=92.80% 00:36:45.763 cpu : usr=99.03%, sys=0.67%, ctx=10, majf=0, minf=50 00:36:45.763 IO depths : 1=0.4%, 2=0.8%, 4=5.1%, 8=78.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:36:45.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 complete : 0=0.0%, 4=89.6%, 8=7.9%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 issued rwts: total=6554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.763 filename0: (groupid=0, jobs=1): err= 0: pid=4087633: Wed Nov 6 15:49:02 2024 00:36:45.763 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.3MiB/10009msec) 00:36:45.763 slat (nsec): min=5662, 
max=82468, avg=9789.05, stdev=7340.63 00:36:45.763 clat (usec): min=8433, max=35355, avg=23741.51, stdev=1981.72 00:36:45.763 lat (usec): min=8455, max=35406, avg=23751.30, stdev=1980.85 00:36:45.763 clat percentiles (usec): 00:36:45.763 | 1.00th=[11731], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725], 00:36:45.763 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:45.763 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:45.763 | 99.00th=[26346], 99.50th=[28443], 99.90th=[31589], 99.95th=[35390], 00:36:45.763 | 99.99th=[35390] 00:36:45.763 bw ( KiB/s): min= 2560, max= 2992, per=4.17%, avg=2676.74, stdev=92.97, samples=19 00:36:45.763 iops : min= 640, max= 748, avg=669.16, stdev=23.24, samples=19 00:36:45.763 lat (msec) : 10=0.46%, 20=2.84%, 50=96.70% 00:36:45.763 cpu : usr=99.00%, sys=0.67%, ctx=40, majf=0, minf=56 00:36:45.763 IO depths : 1=5.8%, 2=11.7%, 4=24.0%, 8=51.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:45.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 issued rwts: total=6722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.763 filename0: (groupid=0, jobs=1): err= 0: pid=4087634: Wed Nov 6 15:49:02 2024 00:36:45.763 read: IOPS=682, BW=2729KiB/s (2794kB/s)(26.7MiB/10017msec) 00:36:45.763 slat (usec): min=5, max=101, avg=18.39, stdev=16.83 00:36:45.763 clat (usec): min=7825, max=48229, avg=23321.99, stdev=4623.16 00:36:45.763 lat (usec): min=7840, max=48267, avg=23340.38, stdev=4625.34 00:36:45.763 clat percentiles (usec): 00:36:45.763 | 1.00th=[11469], 5.00th=[14615], 10.00th=[17433], 20.00th=[21103], 00:36:45.763 | 30.00th=[23200], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:45.763 | 70.00th=[24249], 80.00th=[24511], 90.00th=[26608], 95.00th=[30278], 00:36:45.763 | 99.00th=[41681], 99.50th=[43779], 99.90th=[44827], 99.95th=[47973], 00:36:45.763 | 99.99th=[47973] 00:36:45.763 bw ( KiB/s): min= 2560, max= 3120, per=4.25%, avg=2726.90, stdev=138.90, samples=20 00:36:45.763 iops : min= 640, max= 780, avg=681.70, stdev=34.73, samples=20 00:36:45.763 lat (msec) : 10=0.64%, 20=15.53%, 50=83.83% 00:36:45.763 cpu : usr=98.44%, sys=1.09%, ctx=64, majf=0, minf=76 00:36:45.763 IO depths : 1=1.5%, 2=3.5%, 4=12.8%, 8=70.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:36:45.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 complete : 0=0.0%, 4=90.9%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 issued rwts: total=6834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.763 filename0: (groupid=0, jobs=1): err= 0: pid=4087635: Wed Nov 6 15:49:02 2024 00:36:45.763 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10002msec) 00:36:45.763 slat (nsec): min=5675, max=97414, avg=21339.91, stdev=16848.33 00:36:45.763 clat (usec): min=13691, max=28282, avg=23878.91, stdev=826.88 00:36:45.763 lat (usec): min=13702, max=28289, avg=23900.25, stdev=825.09 00:36:45.763 clat percentiles (usec): 00:36:45.763 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:45.763 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:45.763 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:45.763 | 99.00th=[25297], 99.50th=[25560], 99.90th=[27395], 99.95th=[28181], 00:36:45.763 | 99.99th=[28181] 
00:36:45.763 bw ( KiB/s): min= 2560, max= 2688, per=4.15%, avg=2660.68, stdev=52.80, samples=19 00:36:45.763 iops : min= 640, max= 672, avg=665.11, stdev=13.20, samples=19 00:36:45.763 lat (msec) : 20=0.54%, 50=99.46% 00:36:45.763 cpu : usr=98.59%, sys=0.86%, ctx=97, majf=0, minf=55 00:36:45.763 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:45.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.763 filename0: (groupid=0, jobs=1): err= 0: pid=4087636: Wed Nov 6 15:49:02 2024 00:36:45.763 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10004msec) 00:36:45.763 slat (usec): min=5, max=103, avg=30.40, stdev=16.95 00:36:45.763 clat (usec): min=11927, max=29110, avg=23787.91, stdev=877.39 00:36:45.763 lat (usec): min=11966, max=29149, avg=23818.31, stdev=877.03 00:36:45.763 clat percentiles (usec): 00:36:45.763 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:45.763 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:45.763 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:45.763 | 99.00th=[25297], 99.50th=[26084], 99.90th=[26608], 99.95th=[28705], 00:36:45.763 | 99.99th=[29230] 00:36:45.763 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2653.68, stdev=55.89, samples=19 00:36:45.763 iops : min= 640, max= 672, avg=663.37, stdev=13.95, samples=19 00:36:45.763 lat (msec) : 20=0.54%, 50=99.46% 00:36:45.763 cpu : usr=98.83%, sys=0.73%, ctx=63, majf=0, minf=38 00:36:45.763 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:45.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.763 filename1: (groupid=0, jobs=1): err= 0: pid=4087637: Wed Nov 6 15:49:02 2024 00:36:45.763 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10001msec) 00:36:45.763 slat (nsec): min=5677, max=90703, avg=19691.75, stdev=15662.50 00:36:45.763 clat (usec): min=13686, max=27808, avg=23882.19, stdev=815.43 00:36:45.763 lat (usec): min=13694, max=27828, avg=23901.89, stdev=814.67 00:36:45.763 clat percentiles (usec): 00:36:45.763 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:45.763 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:45.763 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:45.763 | 99.00th=[25560], 99.50th=[25560], 99.90th=[26608], 99.95th=[27395], 00:36:45.763 | 99.99th=[27919] 00:36:45.763 bw ( KiB/s): min= 2560, max= 2688, per=4.15%, avg=2660.68, stdev=52.80, samples=19 00:36:45.763 iops : min= 640, max= 672, avg=665.11, stdev=13.20, samples=19 00:36:45.763 lat (msec) : 20=0.51%, 50=99.49% 00:36:45.763 cpu : usr=98.87%, sys=0.78%, ctx=73, majf=0, minf=57 00:36:45.763 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:45.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.763 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:36:45.763 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.763 filename1: (groupid=0, jobs=1): err= 0: pid=4087639: Wed Nov 6 15:49:02 2024 00:36:45.763 read: IOPS=665, BW=2662KiB/s (2725kB/s)(26.0MiB/10003msec) 00:36:45.763 slat (usec): min=5, max=116, avg=31.62, stdev=18.52 00:36:45.763 clat (usec): min=9159, max=38437, avg=23726.16, stdev=1318.76 00:36:45.764 lat (usec): min=9175, max=38452, avg=23757.77, stdev=1319.90 00:36:45.764 clat percentiles (usec): 00:36:45.764 | 1.00th=[22414], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:45.764 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:45.764 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:36:45.764 | 99.00th=[25297], 99.50th=[26084], 99.90th=[38536], 99.95th=[38536], 00:36:45.764 | 99.99th=[38536] 00:36:45.764 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2647.84, stdev=60.74, samples=19 00:36:45.764 iops : min= 640, max= 672, avg=661.95, stdev=15.20, samples=19 00:36:45.764 lat (msec) : 10=0.24%, 20=0.48%, 50=99.28% 00:36:45.764 cpu : usr=98.93%, sys=0.78%, ctx=33, majf=0, minf=50 00:36:45.764 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:45.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.764 filename1: (groupid=0, jobs=1): err= 0: pid=4087640: Wed Nov 6 15:49:02 2024 00:36:45.764 read: IOPS=669, BW=2677KiB/s (2741kB/s)(26.2MiB/10017msec) 00:36:45.764 slat (nsec): min=5659, max=80876, avg=9923.33, stdev=6842.91 00:36:45.764 clat (usec): min=8544, max=25955, avg=23819.57, stdev=1636.43 00:36:45.764 lat (usec): min=8573, max=25962, avg=23829.50, stdev=1634.54 00:36:45.764 clat percentiles (usec): 00:36:45.764 | 1.00th=[11731], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:45.764 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:45.764 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24511], 95.00th=[24773], 00:36:45.764 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:36:45.764 | 99.99th=[26084] 00:36:45.764 bw ( KiB/s): min= 2560, max= 2944, per=4.17%, avg=2674.90, stdev=81.97, samples=20 00:36:45.764 iops : min= 640, max= 736, avg=668.70, stdev=20.49, samples=20 00:36:45.764 lat (msec) : 10=0.58%, 20=0.85%, 50=98.57% 00:36:45.764 cpu : usr=98.60%, sys=0.97%, ctx=81, majf=0, minf=78 00:36:45.764 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:45.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.764 filename1: (groupid=0, jobs=1): err= 0: pid=4087641: Wed Nov 6 15:49:02 2024 00:36:45.764 read: IOPS=669, BW=2677KiB/s (2741kB/s)(26.2MiB/10017msec) 00:36:45.764 slat (nsec): min=5686, max=80697, avg=15845.51, stdev=11393.72 00:36:45.764 clat (usec): min=3503, max=25967, avg=23765.25, stdev=1652.11 00:36:45.764 lat (usec): min=3532, max=25973, avg=23781.09, stdev=1651.10 00:36:45.764 clat percentiles (usec): 00:36:45.764 | 1.00th=[12256], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:45.764 | 
30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:45.764 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:45.764 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:36:45.764 | 99.99th=[26084] 00:36:45.764 bw ( KiB/s): min= 2560, max= 2944, per=4.17%, avg=2674.90, stdev=81.97, samples=20 00:36:45.764 iops : min= 640, max= 736, avg=668.70, stdev=20.49, samples=20 00:36:45.764 lat (msec) : 4=0.10%, 10=0.27%, 20=1.06%, 50=98.57% 00:36:45.764 cpu : usr=98.98%, sys=0.72%, ctx=10, majf=0, minf=51 00:36:45.764 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:45.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.764 filename1: (groupid=0, jobs=1): err= 0: pid=4087642: Wed Nov 6 15:49:02 2024 00:36:45.764 read: IOPS=644, BW=2580KiB/s (2642kB/s)(25.2MiB/10003msec) 00:36:45.764 slat (nsec): min=5650, max=98518, avg=17181.31, stdev=14828.16 00:36:45.764 clat (usec): min=2992, max=70131, avg=24722.15, stdev=4754.18 00:36:45.764 lat (usec): min=2998, max=70147, avg=24739.33, stdev=4754.19 00:36:45.764 clat percentiles (usec): 00:36:45.764 | 1.00th=[12256], 5.00th=[17695], 10.00th=[20841], 20.00th=[23462], 00:36:45.764 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:36:45.764 | 70.00th=[24511], 80.00th=[25822], 90.00th=[30016], 95.00th=[32900], 00:36:45.764 | 99.00th=[39584], 99.50th=[43779], 99.90th=[61604], 99.95th=[61604], 00:36:45.764 | 99.99th=[69731] 00:36:45.764 bw ( KiB/s): min= 2376, max= 2752, per=4.01%, avg=2574.05, stdev=92.79, samples=19 00:36:45.764 iops : min= 594, max= 688, avg=643.47, stdev=23.24, samples=19 00:36:45.764 lat (msec) : 4=0.06%, 10=0.42%, 20=7.55%, 50=91.72%, 100=0.25% 00:36:45.764 cpu : usr=98.44%, sys=1.11%, ctx=182, majf=0, minf=53 00:36:45.764 IO depths : 1=0.8%, 2=1.7%, 4=6.4%, 8=76.5%, 16=14.7%, 32=0.0%, >=64=0.0% 00:36:45.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 complete : 0=0.0%, 4=89.8%, 8=7.4%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 issued rwts: total=6451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.764 filename1: (groupid=0, jobs=1): err= 0: pid=4087643: Wed Nov 6 15:49:02 2024 00:36:45.764 read: IOPS=675, BW=2703KiB/s (2768kB/s)(26.4MiB/10009msec) 00:36:45.764 slat (nsec): min=5668, max=96837, avg=17294.61, stdev=12867.53 00:36:45.764 clat (usec): min=7645, max=43609, avg=23525.55, stdev=2390.25 00:36:45.764 lat (usec): min=7687, max=43642, avg=23542.85, stdev=2390.46 00:36:45.764 clat percentiles (usec): 00:36:45.764 | 1.00th=[11863], 5.00th=[19268], 10.00th=[23200], 20.00th=[23462], 00:36:45.764 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:45.764 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:45.764 | 99.00th=[26870], 99.50th=[30016], 99.90th=[39060], 99.95th=[43779], 00:36:45.764 | 99.99th=[43779] 00:36:45.764 bw ( KiB/s): min= 2560, max= 2944, per=4.21%, avg=2699.47, stdev=82.14, samples=19 00:36:45.764 iops : min= 640, max= 736, avg=674.84, stdev=20.54, samples=19 00:36:45.764 lat (msec) : 10=0.92%, 20=4.15%, 50=94.93% 00:36:45.764 cpu : usr=98.41%, sys=1.03%, ctx=131, majf=0, 
minf=76 00:36:45.764 IO depths : 1=5.7%, 2=11.6%, 4=23.9%, 8=52.1%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:45.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 issued rwts: total=6764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.764 filename1: (groupid=0, jobs=1): err= 0: pid=4087644: Wed Nov 6 15:49:02 2024 00:36:45.764 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10004msec) 00:36:45.764 slat (nsec): min=5922, max=95335, avg=29440.78, stdev=14556.47 00:36:45.764 clat (usec): min=8105, max=38509, avg=23780.76, stdev=1320.79 00:36:45.764 lat (usec): min=8111, max=38528, avg=23810.20, stdev=1321.28 00:36:45.764 clat percentiles (usec): 00:36:45.764 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:45.764 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:45.764 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:45.764 | 99.00th=[25297], 99.50th=[26084], 99.90th=[38536], 99.95th=[38536], 00:36:45.764 | 99.99th=[38536] 00:36:45.764 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2647.58, stdev=61.13, samples=19 00:36:45.764 iops : min= 640, max= 672, avg=661.89, stdev=15.28, samples=19 00:36:45.764 lat (msec) : 10=0.24%, 20=0.51%, 50=99.25% 00:36:45.764 cpu : usr=98.76%, sys=0.89%, ctx=108, majf=0, minf=51 00:36:45.764 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:45.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.764 filename1: (groupid=0, jobs=1): err= 0: pid=4087645: Wed Nov 6 15:49:02 2024 00:36:45.764 read: IOPS=677, BW=2711KiB/s (2776kB/s)(26.5MiB/10002msec) 00:36:45.764 slat (usec): min=5, max=102, avg=18.70, stdev=15.86 00:36:45.764 clat (usec): min=5108, max=42827, avg=23499.13, stdev=3962.87 00:36:45.764 lat (usec): min=5113, max=42834, avg=23517.83, stdev=3964.13 00:36:45.764 clat percentiles (usec): 00:36:45.764 | 1.00th=[10945], 5.00th=[15795], 10.00th=[19792], 20.00th=[22938], 00:36:45.764 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:45.764 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25560], 95.00th=[28967], 00:36:45.764 | 99.00th=[38536], 99.50th=[41157], 99.90th=[42206], 99.95th=[42730], 00:36:45.764 | 99.99th=[42730] 00:36:45.764 bw ( KiB/s): min= 2496, max= 2880, per=4.21%, avg=2702.00, stdev=92.25, samples=19 00:36:45.764 iops : min= 624, max= 720, avg=675.47, stdev=23.06, samples=19 00:36:45.764 lat (msec) : 10=0.75%, 20=10.09%, 50=89.16% 00:36:45.764 cpu : usr=98.73%, sys=0.96%, ctx=19, majf=0, minf=70 00:36:45.764 IO depths : 1=1.1%, 2=2.4%, 4=8.8%, 8=73.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:36:45.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 complete : 0=0.0%, 4=90.3%, 8=6.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 issued rwts: total=6778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.764 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.764 filename2: (groupid=0, jobs=1): err= 0: pid=4087646: Wed Nov 6 15:49:02 2024 00:36:45.764 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10004msec) 00:36:45.764 slat 
(nsec): min=5745, max=88712, avg=19495.75, stdev=13139.36 00:36:45.764 clat (usec): min=13683, max=29272, avg=23901.41, stdev=828.28 00:36:45.764 lat (usec): min=13692, max=29291, avg=23920.91, stdev=827.51 00:36:45.764 clat percentiles (usec): 00:36:45.764 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:45.764 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:45.764 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:45.764 | 99.00th=[25560], 99.50th=[26346], 99.90th=[27657], 99.95th=[27919], 00:36:45.764 | 99.99th=[29230] 00:36:45.764 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2659.79, stdev=53.00, samples=19 00:36:45.764 iops : min= 640, max= 672, avg=664.84, stdev=13.20, samples=19 00:36:45.764 lat (msec) : 20=0.54%, 50=99.46% 00:36:45.764 cpu : usr=99.00%, sys=0.70%, ctx=28, majf=0, minf=48 00:36:45.764 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:45.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.764 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.765 filename2: (groupid=0, jobs=1): err= 0: pid=4087647: Wed Nov 6 15:49:02 2024 00:36:45.765 read: IOPS=659, BW=2639KiB/s (2703kB/s)(25.8MiB/10003msec) 00:36:45.765 slat (nsec): min=5664, max=78009, avg=15472.61, stdev=11631.60 00:36:45.765 clat (usec): min=3394, max=42491, avg=24151.77, stdev=3130.88 00:36:45.765 lat (usec): min=3400, max=42513, avg=24167.24, stdev=3131.12 00:36:45.765 clat percentiles (usec): 00:36:45.765 | 1.00th=[14222], 5.00th=[20579], 10.00th=[23200], 20.00th=[23462], 00:36:45.765 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:45.765 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25297], 95.00th=[28181], 00:36:45.765 | 99.00th=[38011], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:36:45.765 | 99.99th=[42730] 00:36:45.765 bw ( KiB/s): min= 2528, max= 2736, per=4.10%, avg=2632.11, stdev=62.99, samples=19 00:36:45.765 iops : min= 632, max= 684, avg=658.00, stdev=15.75, samples=19 00:36:45.765 lat (msec) : 4=0.15%, 10=0.18%, 20=3.95%, 50=95.71% 00:36:45.765 cpu : usr=98.43%, sys=1.18%, ctx=92, majf=0, minf=51 00:36:45.765 IO depths : 1=1.5%, 2=3.6%, 4=10.7%, 8=70.3%, 16=13.9%, 32=0.0%, >=64=0.0% 00:36:45.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 complete : 0=0.0%, 4=91.4%, 8=5.5%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 issued rwts: total=6600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.765 filename2: (groupid=0, jobs=1): err= 0: pid=4087648: Wed Nov 6 15:49:02 2024 00:36:45.765 read: IOPS=664, BW=2659KiB/s (2723kB/s)(26.0MiB/10011msec) 00:36:45.765 slat (nsec): min=5667, max=55956, avg=9574.07, stdev=4852.29 00:36:45.765 clat (usec): min=9014, max=37008, avg=23976.13, stdev=1657.84 00:36:45.765 lat (usec): min=9020, max=37025, avg=23985.70, stdev=1657.91 00:36:45.765 clat percentiles (usec): 00:36:45.765 | 1.00th=[16712], 5.00th=[22938], 10.00th=[23200], 20.00th=[23725], 00:36:45.765 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:45.765 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:45.765 | 99.00th=[30278], 99.50th=[31327], 99.90th=[36963], 99.95th=[36963], 00:36:45.765 | 
99.99th=[36963] 00:36:45.765 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2653.68, stdev=55.79, samples=19 00:36:45.765 iops : min= 640, max= 672, avg=663.37, stdev=13.92, samples=19 00:36:45.765 lat (msec) : 10=0.03%, 20=2.04%, 50=97.93% 00:36:45.765 cpu : usr=98.39%, sys=1.05%, ctx=147, majf=0, minf=52 00:36:45.765 IO depths : 1=5.1%, 2=11.3%, 4=24.8%, 8=51.4%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:45.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.765 filename2: (groupid=0, jobs=1): err= 0: pid=4087649: Wed Nov 6 15:49:02 2024 00:36:45.765 read: IOPS=665, BW=2662KiB/s (2725kB/s)(26.0MiB/10003msec) 00:36:45.765 slat (usec): min=5, max=111, avg=30.23, stdev=15.93 00:36:45.765 clat (usec): min=9551, max=37347, avg=23782.66, stdev=1293.25 00:36:45.765 lat (usec): min=9557, max=37363, avg=23812.88, stdev=1293.45 00:36:45.765 clat percentiles (usec): 00:36:45.765 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:45.765 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:45.765 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:45.765 | 99.00th=[25297], 99.50th=[26346], 99.90th=[37487], 99.95th=[37487], 00:36:45.765 | 99.99th=[37487] 00:36:45.765 bw ( KiB/s): min= 2554, max= 2688, per=4.13%, avg=2653.68, stdev=59.02, samples=19 00:36:45.765 iops : min= 638, max= 672, avg=663.37, stdev=14.85, samples=19 00:36:45.765 lat (msec) : 10=0.24%, 20=0.54%, 50=99.22% 00:36:45.765 cpu : usr=98.56%, sys=0.98%, ctx=63, majf=0, minf=45 00:36:45.765 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:45.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.765 filename2: (groupid=0, jobs=1): err= 0: pid=4087650: Wed Nov 6 15:49:02 2024 00:36:45.765 read: IOPS=671, BW=2687KiB/s (2752kB/s)(26.3MiB/10009msec) 00:36:45.765 slat (usec): min=5, max=113, avg=27.05, stdev=19.02 00:36:45.765 clat (usec): min=10386, max=45091, avg=23585.48, stdev=2683.80 00:36:45.765 lat (usec): min=10398, max=45106, avg=23612.53, stdev=2685.44 00:36:45.765 clat percentiles (usec): 00:36:45.765 | 1.00th=[13698], 5.00th=[18482], 10.00th=[22414], 20.00th=[23200], 00:36:45.765 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:45.765 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25822], 00:36:45.765 | 99.00th=[32375], 99.50th=[36439], 99.90th=[42730], 99.95th=[42730], 00:36:45.765 | 99.99th=[45351] 00:36:45.765 bw ( KiB/s): min= 2560, max= 2992, per=4.19%, avg=2689.05, stdev=101.22, samples=19 00:36:45.765 iops : min= 640, max= 748, avg=672.21, stdev=25.32, samples=19 00:36:45.765 lat (msec) : 20=6.62%, 50=93.38% 00:36:45.765 cpu : usr=98.90%, sys=0.81%, ctx=13, majf=0, minf=69 00:36:45.765 IO depths : 1=3.9%, 2=8.0%, 4=20.4%, 8=59.0%, 16=8.7%, 32=0.0%, >=64=0.0% 00:36:45.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 issued rwts: total=6724,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:36:45.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.765 filename2: (groupid=0, jobs=1): err= 0: pid=4087651: Wed Nov 6 15:49:02 2024 00:36:45.765 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10011msec) 00:36:45.765 slat (nsec): min=5556, max=99200, avg=19831.14, stdev=16098.63 00:36:45.765 clat (usec): min=10040, max=46046, avg=23901.73, stdev=3022.43 00:36:45.765 lat (usec): min=10068, max=46062, avg=23921.56, stdev=3022.79 00:36:45.765 clat percentiles (usec): 00:36:45.765 | 1.00th=[14091], 5.00th=[19530], 10.00th=[22676], 20.00th=[23462], 00:36:45.765 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:45.765 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[27657], 00:36:45.765 | 99.00th=[36963], 99.50th=[39584], 99.90th=[45876], 99.95th=[45876], 00:36:45.765 | 99.99th=[45876] 00:36:45.765 bw ( KiB/s): min= 2432, max= 2816, per=4.14%, avg=2656.21, stdev=91.62, samples=19 00:36:45.765 iops : min= 608, max= 704, avg=664.00, stdev=22.85, samples=19 00:36:45.765 lat (msec) : 20=5.61%, 50=94.39% 00:36:45.765 cpu : usr=98.66%, sys=0.95%, ctx=76, majf=0, minf=70 00:36:45.765 IO depths : 1=2.1%, 2=4.5%, 4=10.8%, 8=69.6%, 16=13.0%, 32=0.0%, >=64=0.0% 00:36:45.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 complete : 0=0.0%, 4=89.4%, 8=7.5%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 issued rwts: total=6662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.765 filename2: (groupid=0, jobs=1): err= 0: pid=4087652: Wed Nov 6 15:49:02 2024 00:36:45.765 read: IOPS=665, BW=2661KiB/s (2724kB/s)(26.0MiB/10007msec) 00:36:45.765 slat (usec): min=5, max=113, avg=29.92, stdev=16.90 00:36:45.765 clat (usec): min=9120, max=42336, avg=23767.25, stdev=1453.34 00:36:45.765 lat (usec): min=9126, max=42353, avg=23797.17, stdev=1453.68 00:36:45.765 clat percentiles (usec): 00:36:45.765 | 1.00th=[22152], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:45.765 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:45.765 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:45.765 | 99.00th=[25560], 99.50th=[26346], 99.90th=[42206], 99.95th=[42206], 00:36:45.765 | 99.99th=[42206] 00:36:45.765 bw ( KiB/s): min= 2432, max= 2688, per=4.12%, avg=2647.26, stdev=74.38, samples=19 00:36:45.765 iops : min= 608, max= 672, avg=661.79, stdev=18.58, samples=19 00:36:45.765 lat (msec) : 10=0.24%, 20=0.62%, 50=99.14% 00:36:45.765 cpu : usr=98.85%, sys=0.88%, ctx=14, majf=0, minf=42 00:36:45.765 IO depths : 1=5.9%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:45.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.765 filename2: (groupid=0, jobs=1): err= 0: pid=4087653: Wed Nov 6 15:49:02 2024 00:36:45.765 read: IOPS=675, BW=2702KiB/s (2766kB/s)(26.4MiB/10015msec) 00:36:45.765 slat (nsec): min=5660, max=97564, avg=15422.10, stdev=11492.27 00:36:45.765 clat (usec): min=7220, max=43929, avg=23551.30, stdev=2691.22 00:36:45.765 lat (usec): min=7237, max=43935, avg=23566.72, stdev=2691.38 00:36:45.765 clat percentiles (usec): 00:36:45.765 | 1.00th=[12256], 5.00th=[18744], 10.00th=[22676], 
20.00th=[23462], 00:36:45.765 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:45.765 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:45.765 | 99.00th=[30016], 99.50th=[37487], 99.90th=[41681], 99.95th=[43779], 00:36:45.765 | 99.99th=[43779] 00:36:45.765 bw ( KiB/s): min= 2560, max= 2944, per=4.20%, avg=2698.90, stdev=98.00, samples=20 00:36:45.765 iops : min= 640, max= 736, avg=674.70, stdev=24.51, samples=20 00:36:45.765 lat (msec) : 10=0.56%, 20=5.68%, 50=93.76% 00:36:45.765 cpu : usr=99.03%, sys=0.68%, ctx=11, majf=0, minf=42 00:36:45.765 IO depths : 1=4.9%, 2=9.9%, 4=20.8%, 8=56.2%, 16=8.2%, 32=0.0%, >=64=0.0% 00:36:45.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 complete : 0=0.0%, 4=93.1%, 8=1.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.765 issued rwts: total=6764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:45.765 00:36:45.765 Run status group 0 (all jobs): 00:36:45.765 READ: bw=62.7MiB/s (65.7MB/s), 2580KiB/s-2831KiB/s (2642kB/s-2899kB/s), io=628MiB (658MB), run=10001-10019msec 00:36:45.765 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:45.765 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:45.765 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:45.765 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:45.765 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:45.765 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:45.765 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.765 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.765 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 15:49:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 bdev_null0 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 [2024-11-06 15:49:02.348982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 bdev_null1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:45.766 15:49:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:45.766 { 00:36:45.766 "params": { 00:36:45.766 "name": "Nvme$subsystem", 00:36:45.766 "trtype": "$TEST_TRANSPORT", 00:36:45.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:45.766 "adrfam": "ipv4", 00:36:45.766 "trsvcid": "$NVMF_PORT", 00:36:45.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:45.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:45.766 "hdgst": ${hdgst:-false}, 00:36:45.766 "ddgst": ${ddgst:-false} 00:36:45.766 }, 00:36:45.766 "method": "bdev_nvme_attach_controller" 00:36:45.766 } 00:36:45.766 EOF 00:36:45.766 )") 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:45.766 { 00:36:45.766 "params": { 00:36:45.766 "name": "Nvme$subsystem", 00:36:45.766 "trtype": "$TEST_TRANSPORT", 00:36:45.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:45.766 "adrfam": "ipv4", 00:36:45.766 "trsvcid": "$NVMF_PORT", 00:36:45.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:45.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:45.766 "hdgst": ${hdgst:-false}, 00:36:45.766 "ddgst": ${ddgst:-false} 00:36:45.766 }, 00:36:45.766 "method": "bdev_nvme_attach_controller" 00:36:45.766 } 00:36:45.766 EOF 00:36:45.766 )") 00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
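The sanitizer probe running through this stretch of the trace (ldd | grep libasan above, grep libclang_rt.asan just below) is autotest_common.sh's fio_plugin helper deciding whether a sanitizer runtime must be preloaded ahead of the spdk_bdev engine before fio is launched. A condensed sketch of that logic, reconstructed from the trace (the real helper interleaves this with gen_fio_conf and its bookkeeping may differ in detail):

# sketch of the fio_plugin preload logic visible in the surrounding trace
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # path of the sanitizer runtime the plugin links against, empty if none
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# a found sanitizer runtime is placed before the plugin so it loads first
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

In this run both greps come back empty, which is why the LD_PRELOAD set just below carries only the plugin path (with a leading space where the sanitizer library would have gone).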
00:36:45.766 15:49:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:45.767 "params": { 00:36:45.767 "name": "Nvme0", 00:36:45.767 "trtype": "tcp", 00:36:45.767 "traddr": "10.0.0.2", 00:36:45.767 "adrfam": "ipv4", 00:36:45.767 "trsvcid": "4420", 00:36:45.767 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:45.767 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:45.767 "hdgst": false, 00:36:45.767 "ddgst": false 00:36:45.767 }, 00:36:45.767 "method": "bdev_nvme_attach_controller" 00:36:45.767 },{ 00:36:45.767 "params": { 00:36:45.767 "name": "Nvme1", 00:36:45.767 "trtype": "tcp", 00:36:45.767 "traddr": "10.0.0.2", 00:36:45.767 "adrfam": "ipv4", 00:36:45.767 "trsvcid": "4420", 00:36:45.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:45.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:45.767 "hdgst": false, 00:36:45.767 "ddgst": false 00:36:45.767 }, 00:36:45.767 "method": "bdev_nvme_attach_controller" 00:36:45.767 }' 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:45.767 15:49:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:45.767 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:45.767 ... 00:36:45.767 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:45.767 ... 
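With the attach JSON on /dev/fd/62 (printed above), gen_fio_conf streams the matching job file on /dev/fd/61. Reconstructed from the parameters set at target/dif.sh@115 earlier (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) and the filename banners above, it is roughly the following sketch; the Nvme0n1/Nvme1n1 names are an assumption from SPDK's controller-name plus namespace-index bdev naming, not copied from the trace:

# approximate job file streamed to fio on /dev/fd/61 (a reconstruction,
# not the verbatim generated file; the path is purely illustrative)
cat <<'FIO' > /tmp/dif_rand_params.fio
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
FIO

The three comma-separated bs values map to fio's read,write,trim sizes, which is why the banners above report (R) 8192B, (W) 16.0KiB, (T) 128KiB; two job sections times numjobs=2 is why fio announces "Starting 4 threads" just below.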
00:36:45.767 fio-3.35 00:36:45.767 Starting 4 threads 00:36:51.050 00:36:51.050 filename0: (groupid=0, jobs=1): err= 0: pid=4090056: Wed Nov 6 15:49:08 2024 00:36:51.050 read: IOPS=2862, BW=22.4MiB/s (23.4MB/s)(112MiB/5003msec) 00:36:51.050 slat (nsec): min=5486, max=73086, avg=6175.78, stdev=2329.81 00:36:51.050 clat (usec): min=1142, max=5012, avg=2777.91, stdev=294.27 00:36:51.050 lat (usec): min=1160, max=5018, avg=2784.09, stdev=294.17 00:36:51.050 clat percentiles (usec): 00:36:51.050 | 1.00th=[ 2212], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2606], 00:36:51.050 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:36:51.050 | 70.00th=[ 2769], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3228], 00:36:51.050 | 99.00th=[ 4113], 99.50th=[ 4228], 99.90th=[ 4424], 99.95th=[ 4555], 00:36:51.050 | 99.99th=[ 5014] 00:36:51.050 bw ( KiB/s): min=22736, max=23264, per=24.61%, avg=22915.56, stdev=144.39, samples=9 00:36:51.050 iops : min= 2842, max= 2908, avg=2864.44, stdev=18.05, samples=9 00:36:51.050 lat (msec) : 2=0.29%, 4=98.27%, 10=1.45% 00:36:51.050 cpu : usr=96.78%, sys=3.00%, ctx=6, majf=0, minf=67 00:36:51.050 IO depths : 1=0.1%, 2=0.2%, 4=72.2%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.050 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.050 issued rwts: total=14320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.050 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:51.050 filename0: (groupid=0, jobs=1): err= 0: pid=4090057: Wed Nov 6 15:49:08 2024 00:36:51.050 read: IOPS=2857, BW=22.3MiB/s (23.4MB/s)(112MiB/5001msec) 00:36:51.050 slat (nsec): min=5478, max=77624, avg=6057.72, stdev=2012.73 00:36:51.050 clat (usec): min=1203, max=5315, avg=2782.27, stdev=278.28 00:36:51.050 lat (usec): min=1208, max=5339, avg=2788.33, stdev=278.33 00:36:51.050 clat percentiles (usec): 00:36:51.050 | 1.00th=[ 2311], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2638], 00:36:51.050 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:36:51.050 | 70.00th=[ 2769], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3163], 00:36:51.050 | 99.00th=[ 4113], 99.50th=[ 4228], 99.90th=[ 4752], 99.95th=[ 4883], 00:36:51.050 | 99.99th=[ 5276] 00:36:51.050 bw ( KiB/s): min=22509, max=23040, per=24.54%, avg=22849.44, stdev=166.74, samples=9 00:36:51.050 iops : min= 2813, max= 2880, avg=2856.11, stdev=21.00, samples=9 00:36:51.050 lat (msec) : 2=0.18%, 4=98.52%, 10=1.30% 00:36:51.050 cpu : usr=96.16%, sys=3.62%, ctx=8, majf=0, minf=43 00:36:51.050 IO depths : 1=0.1%, 2=0.1%, 4=74.3%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.050 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.050 issued rwts: total=14291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.050 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:51.050 filename1: (groupid=0, jobs=1): err= 0: pid=4090058: Wed Nov 6 15:49:08 2024 00:36:51.050 read: IOPS=2928, BW=22.9MiB/s (24.0MB/s)(114MiB/5002msec) 00:36:51.050 slat (nsec): min=5490, max=67567, avg=6170.62, stdev=2137.29 00:36:51.050 clat (usec): min=1447, max=4309, avg=2716.04, stdev=237.60 00:36:51.050 lat (usec): min=1453, max=4314, avg=2722.22, stdev=237.62 00:36:51.050 clat percentiles (usec): 00:36:51.050 | 1.00th=[ 2057], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2573], 00:36:51.050 | 30.00th=[ 2671], 
40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:36:51.050 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2966], 95.00th=[ 2999], 00:36:51.050 | 99.00th=[ 3621], 99.50th=[ 3851], 99.90th=[ 4146], 99.95th=[ 4228], 00:36:51.050 | 99.99th=[ 4293] 00:36:51.050 bw ( KiB/s): min=23184, max=23744, per=25.15%, avg=23425.78, stdev=226.48, samples=9 00:36:51.050 iops : min= 2898, max= 2968, avg=2928.22, stdev=28.31, samples=9 00:36:51.050 lat (msec) : 2=0.53%, 4=99.19%, 10=0.27% 00:36:51.050 cpu : usr=96.28%, sys=3.48%, ctx=7, majf=0, minf=40 00:36:51.050 IO depths : 1=0.1%, 2=0.1%, 4=68.2%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.050 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.050 issued rwts: total=14650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.050 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:51.050 filename1: (groupid=0, jobs=1): err= 0: pid=4090059: Wed Nov 6 15:49:08 2024 00:36:51.050 read: IOPS=2994, BW=23.4MiB/s (24.5MB/s)(117MiB/5001msec) 00:36:51.050 slat (nsec): min=5492, max=51421, avg=5944.78, stdev=1245.49 00:36:51.050 clat (usec): min=1271, max=5057, avg=2655.82, stdev=414.85 00:36:51.050 lat (usec): min=1277, max=5081, avg=2661.76, stdev=414.88 00:36:51.050 clat percentiles (usec): 00:36:51.050 | 1.00th=[ 1860], 5.00th=[ 2073], 10.00th=[ 2212], 20.00th=[ 2343], 00:36:51.050 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2704], 00:36:51.050 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 3261], 95.00th=[ 3556], 00:36:51.050 | 99.00th=[ 4015], 99.50th=[ 4113], 99.90th=[ 4359], 99.95th=[ 4752], 00:36:51.050 | 99.99th=[ 5014] 00:36:51.050 bw ( KiB/s): min=23424, max=24336, per=25.77%, avg=24000.00, stdev=320.10, samples=9 00:36:51.050 iops : min= 2928, max= 3042, avg=3000.00, stdev=40.01, samples=9 00:36:51.050 lat (msec) : 2=1.86%, 4=97.12%, 10=1.01% 00:36:51.050 cpu : usr=97.18%, sys=2.56%, ctx=7, majf=0, minf=38 00:36:51.050 IO depths : 1=0.1%, 2=0.5%, 4=70.0%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.050 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.050 issued rwts: total=14976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.050 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:51.050 00:36:51.050 Run status group 0 (all jobs): 00:36:51.050 READ: bw=90.9MiB/s (95.4MB/s), 22.3MiB/s-23.4MiB/s (23.4MB/s-24.5MB/s), io=455MiB (477MB), run=5001-5003msec 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:51.050 15:49:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:51.051 15:49:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:51.051 15:49:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:51.051 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.051 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.051 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.051 15:49:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:51.051 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.051 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.051 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.051 00:36:51.051 real 0m24.504s 00:36:51.051 user 5m15.995s 00:36:51.051 sys 0m4.826s 00:36:51.051 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:51.051 15:49:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.051 ************************************ 00:36:51.051 END TEST fio_dif_rand_params 00:36:51.051 ************************************ 00:36:51.051 15:49:08 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:51.051 15:49:08 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:51.051 15:49:08 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:51.051 15:49:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:51.051 ************************************ 00:36:51.051 START TEST fio_dif_digest 00:36:51.051 ************************************ 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
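The create_subsystems call that starts here repeats the rand_params setup with NULL_DIF=3: a null bdev carrying 16 bytes of metadata with DIF type 3, exported over NVMe/TCP. The rpc_cmd invocations traced below are thin wrappers over SPDK's scripts/rpc.py, so the equivalent standalone target-side setup would look roughly like this (the checkout path and the use of the default RPC socket are assumptions; the arguments are copied from the trace):

# standalone equivalent of the create_subsystems 0 sequence traced below,
# assuming an SPDK checkout and a target listening on the default RPC socket
cd /path/to/spdk   # hypothetical checkout location
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420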
00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.051 bdev_null0 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.051 [2024-11-06 15:49:08.957288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:51.051 { 00:36:51.051 "params": { 00:36:51.051 "name": "Nvme$subsystem", 00:36:51.051 "trtype": "$TEST_TRANSPORT", 00:36:51.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:51.051 "adrfam": "ipv4", 00:36:51.051 "trsvcid": "$NVMF_PORT", 00:36:51.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:51.051 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:36:51.051 "hdgst": ${hdgst:-false}, 00:36:51.051 "ddgst": ${ddgst:-false} 00:36:51.051 }, 00:36:51.051 "method": "bdev_nvme_attach_controller" 00:36:51.051 } 00:36:51.051 EOF 00:36:51.051 )") 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:51.051 15:49:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:51.051 "params": { 00:36:51.051 "name": "Nvme0", 00:36:51.051 "trtype": "tcp", 00:36:51.051 "traddr": "10.0.0.2", 00:36:51.051 "adrfam": "ipv4", 00:36:51.051 "trsvcid": "4420", 00:36:51.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.051 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:51.051 "hdgst": true, 00:36:51.051 "ddgst": true 00:36:51.051 }, 00:36:51.051 "method": "bdev_nvme_attach_controller" 00:36:51.051 }' 00:36:51.051 15:49:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:51.051 15:49:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:51.051 15:49:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:51.051 15:49:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:51.051 15:49:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:51.051 15:49:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:51.332 15:49:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:51.332 15:49:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:51.332 15:49:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:51.332 15:49:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:51.593 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:51.593 ... 
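The job description itself arrives on /dev/fd/61 from gen_fio_conf. An equivalent on-disk job file for the knobs this test picked (bs=128k,128k,128k, numjobs=3, iodepth=3, runtime=10) would look roughly like the following hypothetical reconstruction; the helper's exact output may differ:

  [global]
  thread=1
  rw=randread            # matches the banner just above
  bs=128k,128k,128k
  numjobs=3
  time_based=1
  runtime=10
  [filename0]
  filename=Nvme0n1       # namespace bdev of the Nvme0 controller attached with hdgst/ddgst=true
  iodepth=3

The spdk_bdev ioengine resolves filename against the bdev list it builds from the JSON on /dev/fd/62, so no kernel block device node is involved.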
00:36:51.593 fio-3.35
00:36:51.593 Starting 3 threads
00:37:03.834
00:37:03.834 filename0: (groupid=0, jobs=1): err= 0: pid=4091354: Wed Nov 6 15:49:20 2024
00:37:03.834 read: IOPS=286, BW=35.8MiB/s (37.5MB/s)(359MiB/10046msec)
00:37:03.834 slat (usec): min=5, max=518, avg= 6.78, stdev= 9.65
00:37:03.834 clat (usec): min=6692, max=53655, avg=10458.94, stdev=1399.10
00:37:03.834 lat (usec): min=6699, max=53662, avg=10465.72, stdev=1399.24
00:37:03.834 clat percentiles (usec):
00:37:03.834 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765],
00:37:03.834 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683],
00:37:03.834 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863],
00:37:03.834 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13829], 99.95th=[47449],
00:37:03.834 | 99.99th=[53740]
00:37:03.834 bw ( KiB/s): min=35584, max=38400, per=32.51%, avg=36774.40, stdev=775.93, samples=20
00:37:03.834 iops : min= 278, max= 300, avg=287.30, stdev= 6.06, samples=20
00:37:03.834 lat (msec) : 10=30.54%, 20=69.39%, 50=0.03%, 100=0.03%
00:37:03.834 cpu : usr=95.42%, sys=4.37%, ctx=17, majf=0, minf=183
00:37:03.834 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:03.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:03.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:03.834 issued rwts: total=2875,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:03.834 latency : target=0, window=0, percentile=100.00%, depth=3
00:37:03.834 filename0: (groupid=0, jobs=1): err= 0: pid=4091355: Wed Nov 6 15:49:20 2024
00:37:03.834 read: IOPS=318, BW=39.8MiB/s (41.7MB/s)(400MiB/10043msec)
00:37:03.834 slat (nsec): min=5889, max=30812, avg=6608.80, stdev=961.37
00:37:03.834 clat (usec): min=5664, max=52007, avg=9399.85, stdev=2222.01
00:37:03.834 lat (usec): min=5670, max=52013, avg=9406.46, stdev=2222.04
00:37:03.834 clat percentiles (usec):
00:37:03.834 | 1.00th=[ 7504], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 8586],
00:37:03.834 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372],
00:37:03.834 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[10945],
00:37:03.834 | 99.00th=[12256], 99.50th=[12911], 99.90th=[51119], 99.95th=[51643],
00:37:03.834 | 99.99th=[52167]
00:37:03.834 bw ( KiB/s): min=35840, max=43520, per=36.16%, avg=40904.70, stdev=2126.63, samples=20
00:37:03.834 iops : min= 280, max= 340, avg=319.55, stdev=16.61, samples=20
00:37:03.834 lat (msec) : 10=82.68%, 20=17.07%, 50=0.06%, 100=0.19%
00:37:03.834 cpu : usr=94.16%, sys=5.60%, ctx=27, majf=0, minf=104
00:37:03.834 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:03.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:03.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:03.834 issued rwts: total=3198,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:03.834 latency : target=0, window=0, percentile=100.00%, depth=3
00:37:03.834 filename0: (groupid=0, jobs=1): err= 0: pid=4091356: Wed Nov 6 15:49:20 2024
00:37:03.834 read: IOPS=280, BW=35.0MiB/s (36.7MB/s)(351MiB/10004msec)
00:37:03.834 slat (nsec): min=5852, max=34482, avg=6567.95, stdev=1224.67
00:37:03.834 clat (usec): min=6099, max=51180, avg=10695.41, stdev=1576.38
00:37:03.834 lat (usec): min=6105, max=51186, avg=10701.97, stdev=1576.38
00:37:03.834 clat percentiles (usec):
00:37:03.834 | 1.00th=[ 8291], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896],
00:37:03.834 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:37:03.834 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:37:03.834 | 99.00th=[12780], 99.50th=[12911], 99.90th=[50594], 99.95th=[51119], 00:37:03.835 | 99.99th=[51119] 00:37:03.835 bw ( KiB/s): min=32768, max=37888, per=31.70%, avg=35852.80, stdev=1109.60, samples=20 00:37:03.835 iops : min= 256, max= 296, avg=280.10, stdev= 8.67, samples=20 00:37:03.835 lat (msec) : 10=21.29%, 20=78.60%, 100=0.11% 00:37:03.835 cpu : usr=94.45%, sys=5.33%, ctx=18, majf=0, minf=133 00:37:03.835 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.835 issued rwts: total=2804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.835 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:03.835 00:37:03.835 Run status group 0 (all jobs): 00:37:03.835 READ: bw=110MiB/s (116MB/s), 35.0MiB/s-39.8MiB/s (36.7MB/s-41.7MB/s), io=1110MiB (1164MB), run=10004-10046msec 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.835 00:37:03.835 real 0m11.304s 00:37:03.835 user 0m45.614s 00:37:03.835 sys 0m1.871s 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:03.835 15:49:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.835 ************************************ 00:37:03.835 END TEST fio_dif_digest 00:37:03.835 ************************************ 00:37:03.835 15:49:20 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:03.835 15:49:20 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:03.835 15:49:20 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:03.835 15:49:20 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:03.835 15:49:20 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:03.835 15:49:20 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:03.835 15:49:20 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:03.835 15:49:20 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:03.835 rmmod nvme_tcp 00:37:03.835 rmmod nvme_fabrics 00:37:03.835 rmmod nvme_keyring 00:37:03.835 15:49:20 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:03.835 15:49:20 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:03.835 15:49:20 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:03.835 15:49:20 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 4081195 ']' 00:37:03.835 15:49:20 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 4081195 00:37:03.835 15:49:20 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 4081195 ']' 00:37:03.835 15:49:20 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 4081195 00:37:03.835 15:49:20 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:37:03.835 15:49:20 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:03.835 15:49:20 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4081195 00:37:03.835 15:49:20 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:03.835 15:49:20 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:03.835 15:49:20 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4081195' 00:37:03.835 killing process with pid 4081195 00:37:03.835 15:49:20 nvmf_dif -- common/autotest_common.sh@971 -- # kill 4081195 00:37:03.835 15:49:20 nvmf_dif -- common/autotest_common.sh@976 -- # wait 4081195 00:37:03.835 15:49:20 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:03.835 15:49:20 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:06.379 Waiting for block devices as requested 00:37:06.379 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:06.379 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:06.379 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:06.379 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:06.379 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:06.639 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:06.639 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:06.639 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:06.639 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:06.899 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:06.899 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:07.160 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:07.160 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:07.160 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:07.421 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:07.421 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:07.421 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:07.991 15:49:25 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:07.991 15:49:25 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:07.991 15:49:25 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:07.991 15:49:25 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:07.991 15:49:25 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:07.991 15:49:25 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:07.991 15:49:25 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:07.991 15:49:25 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:07.991 15:49:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:07.991 15:49:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:07.991 15:49:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:09.902 15:49:27 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:09.902 
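Unwound into plain commands, the nvmftestfini teardown traced above is roughly the following sketch; the PID and interface/namespace names are the ones from this run, and the netns removal is an assumption since the _remove_spdk_ns helper runs with xtrace disabled:

  kill 4081195 && wait 4081195                            # killprocess: stop the nvmf_tgt reactor
  modprobe -v -r nvme-tcp                                 # unloads nvme_tcp and drags out nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop only the SPDK-tagged ACCEPT rules
  ip netns delete cvl_0_0_ns_spdk                         # assumed body of the _remove_spdk_ns helper
  ip -4 addr flush cvl_0_1                                # release the initiator-side address

In between, setup.sh reset rebinds the test devices to their kernel drivers, which is the "Waiting for block devices as requested" block above.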
00:37:09.902 real 1m18.674s 00:37:09.902 user 7m57.737s 00:37:09.902 sys 0m22.648s 00:37:09.902 15:49:27 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:09.902 15:49:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:09.902 ************************************ 00:37:09.902 END TEST nvmf_dif 00:37:09.902 ************************************ 00:37:09.902 15:49:27 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:09.902 15:49:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:09.902 15:49:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:09.902 15:49:27 -- common/autotest_common.sh@10 -- # set +x 00:37:09.902 ************************************ 00:37:09.902 START TEST nvmf_abort_qd_sizes 00:37:09.902 ************************************ 00:37:09.902 15:49:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:10.162 * Looking for test storage... 00:37:10.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:10.162 15:49:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:10.162 15:49:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:37:10.162 15:49:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:10.162 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:10.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.163 --rc genhtml_branch_coverage=1 00:37:10.163 --rc genhtml_function_coverage=1 00:37:10.163 --rc genhtml_legend=1 00:37:10.163 --rc geninfo_all_blocks=1 00:37:10.163 --rc geninfo_unexecuted_blocks=1 00:37:10.163 00:37:10.163 ' 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:10.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.163 --rc genhtml_branch_coverage=1 00:37:10.163 --rc genhtml_function_coverage=1 00:37:10.163 --rc genhtml_legend=1 00:37:10.163 --rc geninfo_all_blocks=1 00:37:10.163 --rc geninfo_unexecuted_blocks=1 00:37:10.163 00:37:10.163 ' 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:10.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.163 --rc genhtml_branch_coverage=1 00:37:10.163 --rc genhtml_function_coverage=1 00:37:10.163 --rc genhtml_legend=1 00:37:10.163 --rc geninfo_all_blocks=1 00:37:10.163 --rc geninfo_unexecuted_blocks=1 00:37:10.163 00:37:10.163 ' 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:10.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.163 --rc genhtml_branch_coverage=1 00:37:10.163 --rc genhtml_function_coverage=1 00:37:10.163 --rc genhtml_legend=1 00:37:10.163 --rc geninfo_all_blocks=1 00:37:10.163 --rc geninfo_unexecuted_blocks=1 00:37:10.163 00:37:10.163 ' 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:10.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:10.163 15:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:18.299 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:18.299 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:18.299 Found net devices under 0000:31:00.0: cvl_0_0 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:18.299 Found net devices under 0000:31:00.1: cvl_0_1 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:18.299 15:49:35 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:18.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:18.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:37:18.299 00:37:18.299 --- 10.0.0.2 ping statistics --- 00:37:18.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:18.299 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:18.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:18.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:37:18.299 00:37:18.299 --- 10.0.0.1 ping statistics --- 00:37:18.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:18.299 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:18.299 15:49:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:20.843 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:20.843 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:20.843 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:20.843 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:20.843 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:21.104 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:21.104 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:21.104 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:21.104 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:21.104 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:21.104 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:21.104 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:21.104 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:21.104 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:21.104 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:21.104 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:21.104 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=4100837 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 4100837 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 4100837 ']' 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:21.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:21.676 15:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:21.676 [2024-11-06 15:49:39.469344] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:37:21.676 [2024-11-06 15:49:39.469406] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:21.676 [2024-11-06 15:49:39.564712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:21.676 [2024-11-06 15:49:39.602652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:21.676 [2024-11-06 15:49:39.602687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:21.676 [2024-11-06 15:49:39.602695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:21.676 [2024-11-06 15:49:39.602702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:21.676 [2024-11-06 15:49:39.602708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:21.676 [2024-11-06 15:49:39.604362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.676 [2024-11-06 15:49:39.604517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:21.676 [2024-11-06 15:49:39.604916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:21.677 [2024-11-06 15:49:39.604917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:22.618 
15:49:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:22.618 15:49:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:22.618 ************************************ 00:37:22.618 START TEST spdk_target_abort 00:37:22.618 ************************************ 00:37:22.618 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:37:22.618 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:22.618 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:22.618 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.618 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.880 spdk_targetn1 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.880 [2024-11-06 15:49:40.658985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.880 [2024-11-06 15:49:40.707460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:22.880 15:49:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:23.141 [2024-11-06 15:49:41.001398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:24 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:23.141 [2024-11-06 15:49:41.001446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0004 p:1 m:0 dnr:0 00:37:23.141 [2024-11-06 15:49:41.016359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:488 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:23.141 [2024-11-06 15:49:41.016393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003e p:1 m:0 dnr:0 00:37:23.141 [2024-11-06 15:49:41.024216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:704 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:23.141 [2024-11-06 15:49:41.024244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0059 p:1 m:0 dnr:0 00:37:23.141 [2024-11-06 15:49:41.048249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1544 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:23.141 [2024-11-06 15:49:41.048280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00c3 p:1 m:0 dnr:0 00:37:23.141 [2024-11-06 15:49:41.048739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1576 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:23.141 [2024-11-06 15:49:41.048769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00c6 p:1 m:0 dnr:0 00:37:23.141 [2024-11-06 15:49:41.064258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2072 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:23.141 [2024-11-06 15:49:41.064287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:23.141 [2024-11-06 15:49:41.086230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2784 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:23.141 [2024-11-06 15:49:41.086259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:23.141 [2024-11-06 15:49:41.086366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2800 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:23.141 [2024-11-06 15:49:41.086378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:23.141 [2024-11-06 15:49:41.095369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3136 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:23.141 [2024-11-06 15:49:41.095397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0089 p:0 m:0 dnr:0 00:37:23.141 [2024-11-06 15:49:41.110377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3600 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:23.141 [2024-11-06 15:49:41.110406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00c5 p:0 m:0 dnr:0 00:37:23.141 [2024-11-06 15:49:41.118325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3896 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:23.141 [2024-11-06 15:49:41.118353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00e8 p:0 m:0 dnr:0 00:37:26.444 Initializing NVMe Controllers 00:37:26.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:26.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:26.444 Initialization complete. Launching workers. 00:37:26.444 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12477, failed: 11 00:37:26.444 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2228, failed to submit 10260 00:37:26.444 success 772, unsuccessful 1456, failed 0 00:37:26.444 15:49:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:26.444 15:49:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:26.444 [2024-11-06 15:49:44.195928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:640 len:8 PRP1 0x200004e58000 PRP2 0x0 00:37:26.444 [2024-11-06 15:49:44.195967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:37:26.444 [2024-11-06 15:49:44.250842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:1848 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:37:26.444 [2024-11-06 15:49:44.250869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00f9 p:1 m:0 dnr:0 00:37:26.444 [2024-11-06 15:49:44.314701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:3440 len:8 PRP1 0x200004e42000 PRP2 0x0 00:37:26.444 [2024-11-06 15:49:44.314725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:00b5 p:0 m:0 dnr:0 00:37:26.444 [2024-11-06 15:49:44.322813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:3512 len:8 PRP1 0x200004e40000 PRP2 0x0 00:37:26.444 [2024-11-06 15:49:44.322833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00c7 p:0 m:0 dnr:0 00:37:29.745 [2024-11-06 15:49:47.312108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf15410 is same with the state(6) to be set 00:37:29.745 Initializing NVMe Controllers 00:37:29.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:29.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:29.745 Initialization complete. Launching workers. 
00:37:29.745 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8518, failed: 4 00:37:29.745 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1182, failed to submit 7340 00:37:29.745 success 366, unsuccessful 816, failed 0 00:37:29.745 15:49:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:29.745 15:49:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:29.746 [2024-11-06 15:49:47.494428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:154 nsid:1 lba:2000 len:8 PRP1 0x200004ae2000 PRP2 0x0 00:37:29.746 [2024-11-06 15:49:47.494452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:154 cdw0:0 sqhd:00c8 p:0 m:0 dnr:0 00:37:29.746 [2024-11-06 15:49:47.509189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:186 nsid:1 lba:3688 len:8 PRP1 0x200004b1e000 PRP2 0x0 00:37:29.746 [2024-11-06 15:49:47.509205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:186 cdw0:0 sqhd:009d p:1 m:0 dnr:0 00:37:33.045 Initializing NVMe Controllers 00:37:33.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:33.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:33.045 Initialization complete. Launching workers. 00:37:33.045 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43766, failed: 2 00:37:33.045 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2777, failed to submit 40991 00:37:33.045 success 613, unsuccessful 2164, failed 0 00:37:33.045 15:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:33.045 15:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.045 15:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.045 15:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.045 15:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:33.045 15:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.045 15:49:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:34.429 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.429 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 4100837 00:37:34.429 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 4100837 ']' 00:37:34.429 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 4100837 00:37:34.429 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:37:34.429 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:34.429 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4100837 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4100837' 00:37:34.689 killing process with pid 4100837 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 4100837 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 4100837 00:37:34.689 00:37:34.689 real 0m12.183s 00:37:34.689 user 0m49.452s 00:37:34.689 sys 0m2.112s 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:34.689 ************************************ 00:37:34.689 END TEST spdk_target_abort 00:37:34.689 ************************************ 00:37:34.689 15:49:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:34.689 15:49:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:34.689 15:49:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:34.689 15:49:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:34.689 ************************************ 00:37:34.689 START TEST kernel_target_abort 00:37:34.689 ************************************ 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:34.689 15:49:52 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:34.689 15:49:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:37.990 Waiting for block devices as requested 00:37:38.251 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:38.251 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:38.251 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:38.511 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:38.511 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:38.511 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:38.771 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:38.771 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:38.771 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:39.032 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:39.032 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:39.292 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:39.292 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:39.292 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:39.292 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:39.552 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:39.552 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:39.812 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:39.812 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:39.812 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:39.812 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:39.812 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:39.812 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:39.812 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:39.812 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:39.812 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:40.074 No valid GPT data, bailing 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:40.074 15:49:57 
nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:37:40.074 00:37:40.074 Discovery Log Number of Records 2, Generation counter 2 00:37:40.074 =====Discovery Log Entry 0====== 00:37:40.074 trtype: tcp 00:37:40.074 adrfam: ipv4 00:37:40.074 subtype: current discovery subsystem 00:37:40.074 treq: not specified, sq flow control disable supported 00:37:40.074 portid: 1 00:37:40.074 trsvcid: 4420 00:37:40.074 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:40.074 traddr: 10.0.0.1 00:37:40.074 eflags: none 00:37:40.074 sectype: none 00:37:40.074 =====Discovery Log Entry 1====== 00:37:40.074 trtype: tcp 00:37:40.074 adrfam: ipv4 00:37:40.074 subtype: nvme subsystem 00:37:40.074 treq: not specified, sq flow control disable supported 00:37:40.074 portid: 1 00:37:40.074 trsvcid: 4420 00:37:40.074 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:40.074 traddr: 10.0.0.1 00:37:40.074 eflags: none 00:37:40.074 sectype: none 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:40.074 
15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:40.074 15:49:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:43.372 Initializing NVMe Controllers 00:37:43.372 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:43.372 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:43.372 Initialization complete. Launching workers. 
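
The xtrace above shows the rabort() helper assembling the -r transport string one field at a time before entering the same queue-depth loop, now pointed at the kernel target on 10.0.0.1. A sketch reconstructed from those trace lines (variable names are taken from the trace; the body is simplified):

    rabort() {
        local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
        local qds qd target r
        qds=(4 24 64)
        for r in trtype adrfam traddr trsvcid subnqn; do
            target+="${target:+ }$r:${!r}"   # grows to 'trtype:tcp adrfam:IPv4 ...'
        done
        for qd in "${qds[@]}"; do
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
                -q "$qd" -w rw -M 50 -o 4096 -r "$target"
        done
    }
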
00:37:43.372 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67516, failed: 0 00:37:43.372 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67516, failed to submit 0 00:37:43.372 success 0, unsuccessful 67516, failed 0 00:37:43.372 15:50:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:43.372 15:50:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:46.669 Initializing NVMe Controllers 00:37:46.669 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:46.669 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:46.669 Initialization complete. Launching workers. 00:37:46.669 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 119529, failed: 0 00:37:46.669 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30102, failed to submit 89427 00:37:46.669 success 0, unsuccessful 30102, failed 0 00:37:46.669 15:50:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:46.669 15:50:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:49.969 Initializing NVMe Controllers 00:37:49.969 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:49.969 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:49.969 Initialization complete. Launching workers. 
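
Note how the abort outcome flips against the in-kernel target: the kernel_target_abort runs all come back success 0, every submitted abort completing unsuccessfully (the qd 64 summary below repeats the pattern), whereas the earlier spdk_target_abort runs completed a portion of their aborts successfully. The target exercised here was stood up by the configure_kernel_target trace further up, which drives the nvmet configfs tree directly. A sketch of those steps (paths and echoed values are verbatim from the trace; xtrace does not show redirect targets, so the attribute file names below are the standard nvmet configfs ones and are an assumption):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover output earlier in the log (two records, one discovery subsystem and one nvme subsystem on 10.0.0.1:4420/tcp) confirms the port and subsystem came up as configured.
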
00:37:49.969 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145384, failed: 0 00:37:49.969 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36378, failed to submit 109006 00:37:49.969 success 0, unsuccessful 36378, failed 0 00:37:49.969 15:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:49.969 15:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:49.969 15:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:49.969 15:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:49.969 15:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:49.969 15:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:49.969 15:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:49.969 15:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:49.969 15:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:49.969 15:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:53.415 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:53.415 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:54.798 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:55.372 00:37:55.372 real 0m20.447s 00:37:55.372 user 0m9.969s 00:37:55.372 sys 0m6.089s 00:37:55.372 15:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:55.372 15:50:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.372 ************************************ 00:37:55.372 END TEST kernel_target_abort 00:37:55.372 ************************************ 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:55.372 rmmod nvme_tcp 00:37:55.372 rmmod nvme_fabrics 00:37:55.372 rmmod nvme_keyring 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 4100837 ']' 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 4100837 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 4100837 ']' 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 4100837 00:37:55.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (4100837) - No such process 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 4100837 is not found' 00:37:55.372 Process with pid 4100837 is not found 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:55.372 15:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:58.673 Waiting for block devices as requested 00:37:58.673 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:58.673 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:58.933 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:58.933 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:58.933 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:59.193 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:59.193 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:59.193 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:59.454 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:59.454 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:59.714 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:59.714 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:59.714 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:59.974 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:59.974 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:59.974 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:59.974 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:00.545 15:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:00.545 15:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:00.545 15:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:00.545 15:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:00.545 15:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:00.545 15:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:00.545 15:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:00.545 15:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:00.545 15:50:18 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.545 15:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
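
Teardown runs in reverse of setup: clean_kernel_target disables the namespace, unlinks the subsystem from the port, removes the configfs nodes innermost-first, and unloads nvmet, after which nvmftestfini unloads the initiator-side modules (the rmmod lines above). A sketch of the order, reconstructed from the trace and using the same $nvmet/$subsys shorthands as the setup sketch earlier (the redirect target of the bare 'echo 0' is invisible in xtrace, so the enable path is an assumption):

    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet
    modprobe -v -r nvme-tcp   # initiator side: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
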
00:38:00.545 15:50:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:02.455 15:50:20 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:02.455 00:38:02.455 real 0m52.509s 00:38:02.455 user 1m4.828s 00:38:02.455 sys 0m19.268s 00:38:02.455 15:50:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:02.455 15:50:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:02.455 ************************************ 00:38:02.455 END TEST nvmf_abort_qd_sizes 00:38:02.455 ************************************ 00:38:02.455 15:50:20 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:02.455 15:50:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:02.455 15:50:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:02.455 15:50:20 -- common/autotest_common.sh@10 -- # set +x 00:38:02.455 ************************************ 00:38:02.455 START TEST keyring_file 00:38:02.455 ************************************ 00:38:02.455 15:50:20 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:02.716 * Looking for test storage... 00:38:02.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:02.716 15:50:20 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:02.716 15:50:20 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:38:02.716 15:50:20 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:02.716 15:50:20 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:02.716 15:50:20 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:02.717 15:50:20 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:02.717 15:50:20 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:02.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.717 --rc genhtml_branch_coverage=1 00:38:02.717 --rc genhtml_function_coverage=1 00:38:02.717 --rc genhtml_legend=1 00:38:02.717 --rc geninfo_all_blocks=1 00:38:02.717 --rc geninfo_unexecuted_blocks=1 00:38:02.717 00:38:02.717 ' 00:38:02.717 15:50:20 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:02.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.717 --rc genhtml_branch_coverage=1 00:38:02.717 --rc genhtml_function_coverage=1 00:38:02.717 --rc genhtml_legend=1 00:38:02.717 --rc geninfo_all_blocks=1 00:38:02.717 --rc geninfo_unexecuted_blocks=1 00:38:02.717 00:38:02.717 ' 00:38:02.717 15:50:20 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:02.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.717 --rc genhtml_branch_coverage=1 00:38:02.717 --rc genhtml_function_coverage=1 00:38:02.717 --rc genhtml_legend=1 00:38:02.717 --rc geninfo_all_blocks=1 00:38:02.717 --rc geninfo_unexecuted_blocks=1 00:38:02.717 00:38:02.717 ' 00:38:02.717 15:50:20 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:02.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.717 --rc genhtml_branch_coverage=1 00:38:02.717 --rc genhtml_function_coverage=1 00:38:02.717 --rc genhtml_legend=1 00:38:02.717 --rc geninfo_all_blocks=1 00:38:02.717 --rc geninfo_unexecuted_blocks=1 00:38:02.717 00:38:02.717 ' 00:38:02.717 15:50:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:02.717 15:50:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:02.717 
15:50:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:02.717 15:50:20 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:02.717 15:50:20 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.717 15:50:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.717 15:50:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.717 15:50:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:02.717 15:50:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@51 -- # : 0 
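
Before any keyring work, the trace above walks the version comparison in scripts/common.sh (the lt 1.15 2 call, used to choose lcov options): version strings are split on ., - and : and compared component-wise. A simplified, self-contained equivalent of the path the log exercises (the real cmp_versions supports more operators and sanitizes non-numeric components via decimal(); this sketch assumes plain numeric parts):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < len; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad the shorter version with 0s
            ((a > b)) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((a < b)) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]   # versions are equal
    }

With these definitions, lt 1.15 2 returns 0 (true), which is exactly the branch the trace takes before exporting the LCOV_OPTS variables.
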
00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:02.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:02.717 15:50:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:02.717 15:50:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:02.717 15:50:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:02.717 15:50:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:02.717 15:50:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:02.717 15:50:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:02.717 15:50:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:02.717 15:50:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:02.717 15:50:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:02.717 15:50:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:02.717 15:50:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:02.717 15:50:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:02.717 15:50:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Zdwl8G9SMm 00:38:02.717 15:50:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:02.717 15:50:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:02.978 15:50:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Zdwl8G9SMm 00:38:02.978 15:50:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Zdwl8G9SMm 00:38:02.978 15:50:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Zdwl8G9SMm 00:38:02.978 15:50:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:02.978 15:50:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:02.978 15:50:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:02.978 15:50:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:02.978 15:50:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:02.978 15:50:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:02.978 15:50:20 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.bj1bNoIUn5 00:38:02.978 15:50:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:02.978 15:50:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:02.978 15:50:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:02.978 15:50:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:02.978 15:50:20 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:02.978 15:50:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:02.978 15:50:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:02.978 15:50:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bj1bNoIUn5 00:38:02.978 15:50:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bj1bNoIUn5 00:38:02.978 15:50:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.bj1bNoIUn5 00:38:02.978 15:50:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=4111395 00:38:02.978 15:50:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 4111395 00:38:02.978 15:50:20 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:02.978 15:50:20 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 4111395 ']' 00:38:02.978 15:50:20 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:02.979 15:50:20 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:02.979 15:50:20 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:02.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:02.979 15:50:20 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:02.979 15:50:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:02.979 [2024-11-06 15:50:20.847963] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
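
The prep_key calls above generate the two TLS PSKs the file-based keyring tests use: each key is rendered in the NVMe/TCP PSK interchange format and written to a mktemp path with mode 0600 (here /tmp/tmp.Zdwl8G9SMm for key0 and /tmp/tmp.bj1bNoIUn5 for key1). The body of the 'python -' heredoc is not visible in the xtrace, so the encoding below is only a sketch of what the helper names imply (prefix, two-digit digest indicator, base64 of the key bytes plus a trailing CRC32); treat the CRC byte order and the digest encoding in particular as assumptions:

    format_key() {   # e.g. format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
        local prefix=$1 key=$2 digest=$3
        python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[2]); c=zlib.crc32(k).to_bytes(4,"little"); print(sys.argv[1]+":"+format(int(sys.argv[3]),"02d")+":"+base64.b64encode(k+c).decode()+":")' \
            "$prefix" "$key" "$digest"
    }
    key0path=$(mktemp)
    format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 > "$key0path"
    chmod 0600 "$key0path"

The 0600 mode matters: the keyring code refuses world-readable key files, and later keyring tests in this suite deliberately chmod a key to provoke that failure.
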
00:38:02.979 [2024-11-06 15:50:20.848034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111395 ] 00:38:02.979 [2024-11-06 15:50:20.943870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.238 [2024-11-06 15:50:20.997007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:03.809 15:50:21 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:03.809 15:50:21 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:03.809 15:50:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:03.809 15:50:21 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:03.809 15:50:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:03.809 [2024-11-06 15:50:21.658026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:03.809 null0 00:38:03.809 [2024-11-06 15:50:21.690072] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:03.810 [2024-11-06 15:50:21.690327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:03.810 15:50:21 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:03.810 [2024-11-06 15:50:21.722141] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:03.810 request: 00:38:03.810 { 00:38:03.810 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:03.810 "secure_channel": false, 00:38:03.810 "listen_address": { 00:38:03.810 "trtype": "tcp", 00:38:03.810 "traddr": "127.0.0.1", 00:38:03.810 "trsvcid": "4420" 00:38:03.810 }, 00:38:03.810 "method": "nvmf_subsystem_add_listener", 00:38:03.810 "req_id": 1 00:38:03.810 } 00:38:03.810 Got JSON-RPC error response 00:38:03.810 response: 00:38:03.810 { 00:38:03.810 "code": -32602, 00:38:03.810 "message": "Invalid parameters" 00:38:03.810 } 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:03.810 15:50:21 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:03.810 15:50:21 keyring_file -- keyring/file.sh@47 -- # bperfpid=4111442 00:38:03.810 15:50:21 keyring_file -- keyring/file.sh@49 -- # waitforlisten 4111442 /var/tmp/bperf.sock 00:38:03.810 15:50:21 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 4111442 ']' 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:03.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:03.810 15:50:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:03.810 [2024-11-06 15:50:21.789658] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:38:03.810 [2024-11-06 15:50:21.789714] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111442 ] 00:38:04.070 [2024-11-06 15:50:21.877202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:04.070 [2024-11-06 15:50:21.913968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:04.644 15:50:22 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:04.644 15:50:22 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:04.644 15:50:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zdwl8G9SMm 00:38:04.644 15:50:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zdwl8G9SMm 00:38:04.905 15:50:22 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.bj1bNoIUn5 00:38:04.905 15:50:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.bj1bNoIUn5 00:38:05.165 15:50:22 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:05.165 15:50:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:05.166 15:50:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.166 15:50:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.166 15:50:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:05.426 15:50:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Zdwl8G9SMm == \/\t\m\p\/\t\m\p\.\Z\d\w\l\8\G\9\S\M\m ]] 00:38:05.426 15:50:23 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:05.426 15:50:23 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:05.426 15:50:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.426 15:50:23 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.426 15:50:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:05.426 15:50:23 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.bj1bNoIUn5 == \/\t\m\p\/\t\m\p\.\b\j\1\b\N\o\I\U\n\5 ]] 00:38:05.426 15:50:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:05.426 15:50:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:05.426 15:50:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.426 15:50:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.426 15:50:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.426 15:50:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:05.687 15:50:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:05.687 15:50:23 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:05.687 15:50:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:05.687 15:50:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.687 15:50:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.687 15:50:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:05.687 15:50:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.948 15:50:23 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:05.948 15:50:23 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:05.948 15:50:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:05.948 [2024-11-06 15:50:23.886608] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:06.208 nvme0n1 00:38:06.208 15:50:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:06.208 15:50:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:06.208 15:50:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:06.208 15:50:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:06.208 15:50:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:06.208 15:50:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:06.208 15:50:24 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:06.208 15:50:24 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:06.208 15:50:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:06.208 15:50:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:06.208 15:50:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:06.208 15:50:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:06.208 15:50:24 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:06.469 15:50:24 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:06.469 15:50:24 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:06.469 Running I/O for 1 seconds... 00:38:07.853 18125.00 IOPS, 70.80 MiB/s 00:38:07.853 Latency(us) 00:38:07.853 [2024-11-06T14:50:25.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:07.853 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:07.853 nvme0n1 : 1.00 18186.60 71.04 0.00 0.00 7025.75 2170.88 15619.41 00:38:07.853 [2024-11-06T14:50:25.836Z] =================================================================================================================== 00:38:07.853 [2024-11-06T14:50:25.836Z] Total : 18186.60 71.04 0.00 0.00 7025.75 2170.88 15619.41 00:38:07.853 { 00:38:07.853 "results": [ 00:38:07.853 { 00:38:07.853 "job": "nvme0n1", 00:38:07.853 "core_mask": "0x2", 00:38:07.853 "workload": "randrw", 00:38:07.853 "percentage": 50, 00:38:07.853 "status": "finished", 00:38:07.853 "queue_depth": 128, 00:38:07.853 "io_size": 4096, 00:38:07.853 "runtime": 1.003706, 00:38:07.853 "iops": 18186.600458700057, 00:38:07.853 "mibps": 71.0414080417971, 00:38:07.853 "io_failed": 0, 00:38:07.853 "io_timeout": 0, 00:38:07.853 "avg_latency_us": 7025.751100398086, 00:38:07.853 "min_latency_us": 2170.88, 00:38:07.853 "max_latency_us": 15619.413333333334 00:38:07.853 } 00:38:07.853 ], 00:38:07.853 "core_count": 1 00:38:07.853 } 00:38:07.853 15:50:25 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:07.853 15:50:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:07.853 15:50:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:07.853 15:50:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:07.853 15:50:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:07.853 15:50:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:07.853 15:50:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:07.853 15:50:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:07.853 15:50:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:07.853 15:50:25 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:07.853 15:50:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:07.853 15:50:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:07.853 15:50:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:07.853 15:50:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:07.853 15:50:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.115 15:50:25 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:08.115 15:50:25 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:08.115 15:50:25 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:38:08.115 15:50:25 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:08.115 15:50:25 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:08.115 15:50:25 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:08.115 15:50:25 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:08.115 15:50:25 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:08.115 15:50:25 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:08.115 15:50:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:08.377 [2024-11-06 15:50:26.162733] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:08.377 [2024-11-06 15:50:26.163170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb2ad0 (107): Transport endpoint is not connected 00:38:08.377 [2024-11-06 15:50:26.164165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb2ad0 (9): Bad file descriptor 00:38:08.377 [2024-11-06 15:50:26.165168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:08.377 [2024-11-06 15:50:26.165176] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:08.377 [2024-11-06 15:50:26.165183] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:08.377 [2024-11-06 15:50:26.165190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:08.377 request:
00:38:08.377 {
00:38:08.377 "name": "nvme0",
00:38:08.377 "trtype": "tcp",
00:38:08.377 "traddr": "127.0.0.1",
00:38:08.377 "adrfam": "ipv4",
00:38:08.377 "trsvcid": "4420",
00:38:08.377 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:08.377 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:08.377 "prchk_reftag": false,
00:38:08.377 "prchk_guard": false,
00:38:08.377 "hdgst": false,
00:38:08.377 "ddgst": false,
00:38:08.377 "psk": "key1",
00:38:08.377 "allow_unrecognized_csi": false,
00:38:08.377 "method": "bdev_nvme_attach_controller",
00:38:08.377 "req_id": 1
00:38:08.377 }
00:38:08.377 Got JSON-RPC error response
00:38:08.377 response:
00:38:08.377 {
00:38:08.377 "code": -5,
00:38:08.377 "message": "Input/output error"
00:38:08.377 }
00:38:08.377 15:50:26 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:38:08.377 15:50:26 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:38:08.377 15:50:26 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:38:08.377 15:50:26 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:38:08.377 15:50:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:38:08.377 15:50:26 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:38:08.377 15:50:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:08.377 15:50:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:08.377 15:50:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:08.377 15:50:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:38:08.639 15:50:26 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:38:08.639 15:50:26 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:38:08.639 15:50:26 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:38:08.639 15:50:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:38:08.639 15:50:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:08.639 15:50:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:38:08.639 15:50:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:08.639 15:50:26 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:38:08.639 15:50:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:38:08.639 15:50:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:38:08.900 15:50:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:38:08.900 15:50:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:38:09.161 15:50:26 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:38:09.161 15:50:26 keyring_file -- keyring/file.sh@78 -- # jq length
00:38:09.161 15:50:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:09.161 15:50:27 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:38:09.161 15:50:27 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Zdwl8G9SMm
00:38:09.161 15:50:27 keyring_file -- keyring/file.sh@82 -- #
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zdwl8G9SMm 00:38:09.161 15:50:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:09.161 15:50:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zdwl8G9SMm 00:38:09.161 15:50:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:09.161 15:50:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.161 15:50:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:09.161 15:50:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.161 15:50:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zdwl8G9SMm 00:38:09.161 15:50:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zdwl8G9SMm 00:38:09.422 [2024-11-06 15:50:27.215891] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Zdwl8G9SMm': 0100660 00:38:09.422 [2024-11-06 15:50:27.215911] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:09.422 request: 00:38:09.422 { 00:38:09.422 "name": "key0", 00:38:09.422 "path": "/tmp/tmp.Zdwl8G9SMm", 00:38:09.422 "method": "keyring_file_add_key", 00:38:09.422 "req_id": 1 00:38:09.422 } 00:38:09.422 Got JSON-RPC error response 00:38:09.422 response: 00:38:09.422 { 00:38:09.422 "code": -1, 00:38:09.422 "message": "Operation not permitted" 00:38:09.422 } 00:38:09.422 15:50:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:09.422 15:50:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:09.422 15:50:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:09.422 15:50:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:09.422 15:50:27 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Zdwl8G9SMm 00:38:09.422 15:50:27 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zdwl8G9SMm 00:38:09.422 15:50:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zdwl8G9SMm 00:38:09.683 15:50:27 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Zdwl8G9SMm 00:38:09.683 15:50:27 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:09.683 15:50:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:09.683 15:50:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:09.683 15:50:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:09.683 15:50:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:09.683 15:50:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:09.683 15:50:27 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:09.683 15:50:27 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.683 15:50:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:09.683 15:50:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.683 15:50:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:09.683 15:50:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.683 15:50:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:09.683 15:50:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.683 15:50:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.683 15:50:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.944 [2024-11-06 15:50:27.741236] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Zdwl8G9SMm': No such file or directory 00:38:09.944 [2024-11-06 15:50:27.741249] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:09.944 [2024-11-06 15:50:27.741263] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:09.944 [2024-11-06 15:50:27.741270] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:09.944 [2024-11-06 15:50:27.741275] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:09.944 [2024-11-06 15:50:27.741280] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:09.944 request: 00:38:09.944 { 00:38:09.944 "name": "nvme0", 00:38:09.944 "trtype": "tcp", 00:38:09.944 "traddr": "127.0.0.1", 00:38:09.944 "adrfam": "ipv4", 00:38:09.944 "trsvcid": "4420", 00:38:09.944 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:09.944 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:09.944 "prchk_reftag": false, 00:38:09.944 "prchk_guard": false, 00:38:09.944 "hdgst": false, 00:38:09.944 "ddgst": false, 00:38:09.944 "psk": "key0", 00:38:09.944 "allow_unrecognized_csi": false, 00:38:09.944 "method": "bdev_nvme_attach_controller", 00:38:09.944 "req_id": 1 00:38:09.944 } 00:38:09.944 Got JSON-RPC error response 00:38:09.944 response: 00:38:09.944 { 00:38:09.944 "code": -19, 00:38:09.944 "message": "No such device" 00:38:09.944 } 00:38:09.944 15:50:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:09.944 15:50:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:09.944 15:50:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:09.944 15:50:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:09.944 15:50:27 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:09.944 15:50:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:10.205 15:50:27 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:10.205 15:50:27 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:10.205 15:50:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:10.205 15:50:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:10.205 15:50:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:10.205 15:50:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:10.205 15:50:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.EOr78vznjF 00:38:10.205 15:50:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:10.205 15:50:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:10.205 15:50:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:10.205 15:50:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:10.205 15:50:27 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:10.205 15:50:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:10.205 15:50:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:10.205 15:50:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.EOr78vznjF 00:38:10.205 15:50:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.EOr78vznjF 00:38:10.205 15:50:27 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.EOr78vznjF 00:38:10.205 15:50:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EOr78vznjF 00:38:10.205 15:50:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EOr78vznjF 00:38:10.205 15:50:28 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:10.205 15:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:10.465 nvme0n1 00:38:10.465 15:50:28 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:10.465 15:50:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:10.465 15:50:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:10.465 15:50:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:10.465 15:50:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:10.465 15:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:10.725 15:50:28 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:10.725 15:50:28 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:10.725 15:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:10.984 15:50:28 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:10.984 15:50:28 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:10.984 15:50:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:10.984 15:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:10.984 15:50:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:10.984 15:50:28 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:10.984 15:50:28 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:10.984 15:50:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:10.984 15:50:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:10.984 15:50:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:10.984 15:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:10.984 15:50:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:11.244 15:50:29 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:11.244 15:50:29 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:11.244 15:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:11.505 15:50:29 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:11.505 15:50:29 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:11.505 15:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.505 15:50:29 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:11.505 15:50:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EOr78vznjF 00:38:11.505 15:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EOr78vznjF 00:38:11.766 15:50:29 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.bj1bNoIUn5 00:38:11.766 15:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.bj1bNoIUn5 00:38:12.027 15:50:29 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:12.027 15:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:12.027 nvme0n1 00:38:12.027 15:50:29 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:12.027 15:50:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:12.289 15:50:30 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:12.289 "subsystems": [ 00:38:12.289 { 00:38:12.289 "subsystem": "keyring", 00:38:12.289 "config": [ 00:38:12.289 { 00:38:12.289 "method": "keyring_file_add_key", 00:38:12.289 "params": { 00:38:12.289 "name": "key0", 00:38:12.289 "path": "/tmp/tmp.EOr78vznjF" 00:38:12.289 } 00:38:12.289 }, 00:38:12.289 { 00:38:12.289 "method": "keyring_file_add_key", 00:38:12.289 "params": { 00:38:12.289 "name": "key1", 00:38:12.289 "path": "/tmp/tmp.bj1bNoIUn5" 00:38:12.289 } 00:38:12.289 } 00:38:12.289 ] 
00:38:12.289 }, 00:38:12.289 { 00:38:12.289 "subsystem": "iobuf", 00:38:12.289 "config": [ 00:38:12.289 { 00:38:12.289 "method": "iobuf_set_options", 00:38:12.289 "params": { 00:38:12.289 "small_pool_count": 8192, 00:38:12.289 "large_pool_count": 1024, 00:38:12.289 "small_bufsize": 8192, 00:38:12.289 "large_bufsize": 135168, 00:38:12.289 "enable_numa": false 00:38:12.289 } 00:38:12.289 } 00:38:12.289 ] 00:38:12.289 }, 00:38:12.289 { 00:38:12.289 "subsystem": "sock", 00:38:12.289 "config": [ 00:38:12.289 { 00:38:12.289 "method": "sock_set_default_impl", 00:38:12.289 "params": { 00:38:12.289 "impl_name": "posix" 00:38:12.289 } 00:38:12.289 }, 00:38:12.289 { 00:38:12.289 "method": "sock_impl_set_options", 00:38:12.289 "params": { 00:38:12.289 "impl_name": "ssl", 00:38:12.289 "recv_buf_size": 4096, 00:38:12.289 "send_buf_size": 4096, 00:38:12.289 "enable_recv_pipe": true, 00:38:12.289 "enable_quickack": false, 00:38:12.289 "enable_placement_id": 0, 00:38:12.289 "enable_zerocopy_send_server": true, 00:38:12.289 "enable_zerocopy_send_client": false, 00:38:12.290 "zerocopy_threshold": 0, 00:38:12.290 "tls_version": 0, 00:38:12.290 "enable_ktls": false 00:38:12.290 } 00:38:12.290 }, 00:38:12.290 { 00:38:12.290 "method": "sock_impl_set_options", 00:38:12.290 "params": { 00:38:12.290 "impl_name": "posix", 00:38:12.290 "recv_buf_size": 2097152, 00:38:12.290 "send_buf_size": 2097152, 00:38:12.290 "enable_recv_pipe": true, 00:38:12.290 "enable_quickack": false, 00:38:12.290 "enable_placement_id": 0, 00:38:12.290 "enable_zerocopy_send_server": true, 00:38:12.290 "enable_zerocopy_send_client": false, 00:38:12.290 "zerocopy_threshold": 0, 00:38:12.290 "tls_version": 0, 00:38:12.290 "enable_ktls": false 00:38:12.290 } 00:38:12.290 } 00:38:12.290 ] 00:38:12.290 }, 00:38:12.290 { 00:38:12.290 "subsystem": "vmd", 00:38:12.290 "config": [] 00:38:12.290 }, 00:38:12.290 { 00:38:12.290 "subsystem": "accel", 00:38:12.290 "config": [ 00:38:12.290 { 00:38:12.290 "method": "accel_set_options", 00:38:12.290 "params": { 00:38:12.290 "small_cache_size": 128, 00:38:12.290 "large_cache_size": 16, 00:38:12.290 "task_count": 2048, 00:38:12.290 "sequence_count": 2048, 00:38:12.290 "buf_count": 2048 00:38:12.290 } 00:38:12.290 } 00:38:12.290 ] 00:38:12.290 }, 00:38:12.290 { 00:38:12.290 "subsystem": "bdev", 00:38:12.290 "config": [ 00:38:12.290 { 00:38:12.290 "method": "bdev_set_options", 00:38:12.290 "params": { 00:38:12.290 "bdev_io_pool_size": 65535, 00:38:12.290 "bdev_io_cache_size": 256, 00:38:12.290 "bdev_auto_examine": true, 00:38:12.290 "iobuf_small_cache_size": 128, 00:38:12.290 "iobuf_large_cache_size": 16 00:38:12.290 } 00:38:12.290 }, 00:38:12.290 { 00:38:12.290 "method": "bdev_raid_set_options", 00:38:12.290 "params": { 00:38:12.290 "process_window_size_kb": 1024, 00:38:12.290 "process_max_bandwidth_mb_sec": 0 00:38:12.290 } 00:38:12.290 }, 00:38:12.290 { 00:38:12.290 "method": "bdev_iscsi_set_options", 00:38:12.290 "params": { 00:38:12.290 "timeout_sec": 30 00:38:12.290 } 00:38:12.290 }, 00:38:12.290 { 00:38:12.290 "method": "bdev_nvme_set_options", 00:38:12.290 "params": { 00:38:12.290 "action_on_timeout": "none", 00:38:12.290 "timeout_us": 0, 00:38:12.290 "timeout_admin_us": 0, 00:38:12.290 "keep_alive_timeout_ms": 10000, 00:38:12.290 "arbitration_burst": 0, 00:38:12.290 "low_priority_weight": 0, 00:38:12.290 "medium_priority_weight": 0, 00:38:12.290 "high_priority_weight": 0, 00:38:12.290 "nvme_adminq_poll_period_us": 10000, 00:38:12.290 "nvme_ioq_poll_period_us": 0, 00:38:12.290 "io_queue_requests": 512, 
00:38:12.290 "delay_cmd_submit": true, 00:38:12.290 "transport_retry_count": 4, 00:38:12.290 "bdev_retry_count": 3, 00:38:12.290 "transport_ack_timeout": 0, 00:38:12.290 "ctrlr_loss_timeout_sec": 0, 00:38:12.290 "reconnect_delay_sec": 0, 00:38:12.290 "fast_io_fail_timeout_sec": 0, 00:38:12.290 "disable_auto_failback": false, 00:38:12.290 "generate_uuids": false, 00:38:12.290 "transport_tos": 0, 00:38:12.290 "nvme_error_stat": false, 00:38:12.290 "rdma_srq_size": 0, 00:38:12.290 "io_path_stat": false, 00:38:12.290 "allow_accel_sequence": false, 00:38:12.290 "rdma_max_cq_size": 0, 00:38:12.290 "rdma_cm_event_timeout_ms": 0, 00:38:12.290 "dhchap_digests": [ 00:38:12.290 "sha256", 00:38:12.290 "sha384", 00:38:12.290 "sha512" 00:38:12.290 ], 00:38:12.290 "dhchap_dhgroups": [ 00:38:12.290 "null", 00:38:12.290 "ffdhe2048", 00:38:12.290 "ffdhe3072", 00:38:12.290 "ffdhe4096", 00:38:12.290 "ffdhe6144", 00:38:12.290 "ffdhe8192" 00:38:12.290 ] 00:38:12.290 } 00:38:12.290 }, 00:38:12.290 { 00:38:12.290 "method": "bdev_nvme_attach_controller", 00:38:12.290 "params": { 00:38:12.290 "name": "nvme0", 00:38:12.290 "trtype": "TCP", 00:38:12.290 "adrfam": "IPv4", 00:38:12.290 "traddr": "127.0.0.1", 00:38:12.290 "trsvcid": "4420", 00:38:12.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:12.290 "prchk_reftag": false, 00:38:12.290 "prchk_guard": false, 00:38:12.290 "ctrlr_loss_timeout_sec": 0, 00:38:12.290 "reconnect_delay_sec": 0, 00:38:12.290 "fast_io_fail_timeout_sec": 0, 00:38:12.290 "psk": "key0", 00:38:12.290 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:12.290 "hdgst": false, 00:38:12.290 "ddgst": false, 00:38:12.290 "multipath": "multipath" 00:38:12.290 } 00:38:12.290 }, 00:38:12.290 { 00:38:12.290 "method": "bdev_nvme_set_hotplug", 00:38:12.290 "params": { 00:38:12.290 "period_us": 100000, 00:38:12.290 "enable": false 00:38:12.290 } 00:38:12.290 }, 00:38:12.290 { 00:38:12.290 "method": "bdev_wait_for_examine" 00:38:12.290 } 00:38:12.290 ] 00:38:12.290 }, 00:38:12.290 { 00:38:12.290 "subsystem": "nbd", 00:38:12.290 "config": [] 00:38:12.290 } 00:38:12.290 ] 00:38:12.290 }' 00:38:12.290 15:50:30 keyring_file -- keyring/file.sh@115 -- # killprocess 4111442 00:38:12.290 15:50:30 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 4111442 ']' 00:38:12.290 15:50:30 keyring_file -- common/autotest_common.sh@956 -- # kill -0 4111442 00:38:12.290 15:50:30 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:12.290 15:50:30 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:12.290 15:50:30 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4111442 00:38:12.551 15:50:30 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:12.551 15:50:30 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:12.551 15:50:30 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4111442' 00:38:12.551 killing process with pid 4111442 00:38:12.551 15:50:30 keyring_file -- common/autotest_common.sh@971 -- # kill 4111442 00:38:12.551 Received shutdown signal, test time was about 1.000000 seconds 00:38:12.551 00:38:12.551 Latency(us) 00:38:12.551 [2024-11-06T14:50:30.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.551 [2024-11-06T14:50:30.534Z] =================================================================================================================== 00:38:12.551 [2024-11-06T14:50:30.534Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:38:12.551 15:50:30 keyring_file -- common/autotest_common.sh@976 -- # wait 4111442 00:38:12.551 15:50:30 keyring_file -- keyring/file.sh@118 -- # bperfpid=4113255 00:38:12.551 15:50:30 keyring_file -- keyring/file.sh@120 -- # waitforlisten 4113255 /var/tmp/bperf.sock 00:38:12.551 15:50:30 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 4113255 ']' 00:38:12.552 15:50:30 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:12.552 15:50:30 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:12.552 15:50:30 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:12.552 15:50:30 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:12.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:12.552 15:50:30 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:12.552 15:50:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:12.552 15:50:30 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:12.552 "subsystems": [ 00:38:12.552 { 00:38:12.552 "subsystem": "keyring", 00:38:12.552 "config": [ 00:38:12.552 { 00:38:12.552 "method": "keyring_file_add_key", 00:38:12.552 "params": { 00:38:12.552 "name": "key0", 00:38:12.552 "path": "/tmp/tmp.EOr78vznjF" 00:38:12.552 } 00:38:12.552 }, 00:38:12.552 { 00:38:12.552 "method": "keyring_file_add_key", 00:38:12.552 "params": { 00:38:12.552 "name": "key1", 00:38:12.552 "path": "/tmp/tmp.bj1bNoIUn5" 00:38:12.552 } 00:38:12.552 } 00:38:12.552 ] 00:38:12.552 }, 00:38:12.552 { 00:38:12.552 "subsystem": "iobuf", 00:38:12.552 "config": [ 00:38:12.552 { 00:38:12.552 "method": "iobuf_set_options", 00:38:12.552 "params": { 00:38:12.552 "small_pool_count": 8192, 00:38:12.552 "large_pool_count": 1024, 00:38:12.552 "small_bufsize": 8192, 00:38:12.552 "large_bufsize": 135168, 00:38:12.552 "enable_numa": false 00:38:12.552 } 00:38:12.552 } 00:38:12.552 ] 00:38:12.552 }, 00:38:12.552 { 00:38:12.552 "subsystem": "sock", 00:38:12.552 "config": [ 00:38:12.552 { 00:38:12.552 "method": "sock_set_default_impl", 00:38:12.552 "params": { 00:38:12.552 "impl_name": "posix" 00:38:12.552 } 00:38:12.552 }, 00:38:12.552 { 00:38:12.552 "method": "sock_impl_set_options", 00:38:12.552 "params": { 00:38:12.552 "impl_name": "ssl", 00:38:12.552 "recv_buf_size": 4096, 00:38:12.552 "send_buf_size": 4096, 00:38:12.552 "enable_recv_pipe": true, 00:38:12.552 "enable_quickack": false, 00:38:12.552 "enable_placement_id": 0, 00:38:12.552 "enable_zerocopy_send_server": true, 00:38:12.552 "enable_zerocopy_send_client": false, 00:38:12.552 "zerocopy_threshold": 0, 00:38:12.552 "tls_version": 0, 00:38:12.552 "enable_ktls": false 00:38:12.552 } 00:38:12.552 }, 00:38:12.552 { 00:38:12.552 "method": "sock_impl_set_options", 00:38:12.552 "params": { 00:38:12.552 "impl_name": "posix", 00:38:12.552 "recv_buf_size": 2097152, 00:38:12.552 "send_buf_size": 2097152, 00:38:12.552 "enable_recv_pipe": true, 00:38:12.552 "enable_quickack": false, 00:38:12.552 "enable_placement_id": 0, 00:38:12.552 "enable_zerocopy_send_server": true, 00:38:12.552 "enable_zerocopy_send_client": false, 00:38:12.552 "zerocopy_threshold": 0, 00:38:12.552 "tls_version": 0, 00:38:12.552 "enable_ktls": false 00:38:12.552 } 00:38:12.552 } 00:38:12.552 ] 
00:38:12.552 }, 00:38:12.552 { 00:38:12.552 "subsystem": "vmd", 00:38:12.552 "config": [] 00:38:12.552 }, 00:38:12.552 { 00:38:12.552 "subsystem": "accel", 00:38:12.552 "config": [ 00:38:12.552 { 00:38:12.552 "method": "accel_set_options", 00:38:12.552 "params": { 00:38:12.552 "small_cache_size": 128, 00:38:12.552 "large_cache_size": 16, 00:38:12.552 "task_count": 2048, 00:38:12.552 "sequence_count": 2048, 00:38:12.552 "buf_count": 2048 00:38:12.552 } 00:38:12.552 } 00:38:12.552 ] 00:38:12.552 }, 00:38:12.552 { 00:38:12.552 "subsystem": "bdev", 00:38:12.552 "config": [ 00:38:12.552 { 00:38:12.552 "method": "bdev_set_options", 00:38:12.552 "params": { 00:38:12.552 "bdev_io_pool_size": 65535, 00:38:12.552 "bdev_io_cache_size": 256, 00:38:12.552 "bdev_auto_examine": true, 00:38:12.552 "iobuf_small_cache_size": 128, 00:38:12.552 "iobuf_large_cache_size": 16 00:38:12.552 } 00:38:12.552 }, 00:38:12.552 { 00:38:12.552 "method": "bdev_raid_set_options", 00:38:12.552 "params": { 00:38:12.552 "process_window_size_kb": 1024, 00:38:12.552 "process_max_bandwidth_mb_sec": 0 00:38:12.552 } 00:38:12.552 }, 00:38:12.552 { 00:38:12.552 "method": "bdev_iscsi_set_options", 00:38:12.552 "params": { 00:38:12.552 "timeout_sec": 30 00:38:12.552 } 00:38:12.552 }, 00:38:12.552 { 00:38:12.552 "method": "bdev_nvme_set_options", 00:38:12.552 "params": { 00:38:12.552 "action_on_timeout": "none", 00:38:12.552 "timeout_us": 0, 00:38:12.552 "timeout_admin_us": 0, 00:38:12.552 "keep_alive_timeout_ms": 10000, 00:38:12.552 "arbitration_burst": 0, 00:38:12.552 "low_priority_weight": 0, 00:38:12.552 "medium_priority_weight": 0, 00:38:12.552 "high_priority_weight": 0, 00:38:12.552 "nvme_adminq_poll_period_us": 10000, 00:38:12.552 "nvme_ioq_poll_period_us": 0, 00:38:12.552 "io_queue_requests": 512, 00:38:12.552 "delay_cmd_submit": true, 00:38:12.552 "transport_retry_count": 4, 00:38:12.552 "bdev_retry_count": 3, 00:38:12.552 "transport_ack_timeout": 0, 00:38:12.552 "ctrlr_loss_timeout_sec": 0, 00:38:12.552 "reconnect_delay_sec": 0, 00:38:12.552 "fast_io_fail_timeout_sec": 0, 00:38:12.552 "disable_auto_failback": false, 00:38:12.552 "generate_uuids": false, 00:38:12.552 "transport_tos": 0, 00:38:12.552 "nvme_error_stat": false, 00:38:12.552 "rdma_srq_size": 0, 00:38:12.552 "io_path_stat": false, 00:38:12.552 "allow_accel_sequence": false, 00:38:12.552 "rdma_max_cq_size": 0, 00:38:12.552 "rdma_cm_event_timeout_ms": 0, 00:38:12.552 "dhchap_digests": [ 00:38:12.552 "sha256", 00:38:12.552 "sha384", 00:38:12.552 "sha512" 00:38:12.552 ], 00:38:12.552 "dhchap_dhgroups": [ 00:38:12.552 "null", 00:38:12.552 "ffdhe2048", 00:38:12.552 "ffdhe3072", 00:38:12.552 "ffdhe4096", 00:38:12.552 "ffdhe6144", 00:38:12.552 "ffdhe8192" 00:38:12.552 ] 00:38:12.552 } 00:38:12.552 }, 00:38:12.552 { 00:38:12.552 "method": "bdev_nvme_attach_controller", 00:38:12.552 "params": { 00:38:12.552 "name": "nvme0", 00:38:12.552 "trtype": "TCP", 00:38:12.552 "adrfam": "IPv4", 00:38:12.552 "traddr": "127.0.0.1", 00:38:12.552 "trsvcid": "4420", 00:38:12.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:12.552 "prchk_reftag": false, 00:38:12.552 "prchk_guard": false, 00:38:12.552 "ctrlr_loss_timeout_sec": 0, 00:38:12.552 "reconnect_delay_sec": 0, 00:38:12.553 "fast_io_fail_timeout_sec": 0, 00:38:12.553 "psk": "key0", 00:38:12.553 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:12.553 "hdgst": false, 00:38:12.553 "ddgst": false, 00:38:12.553 "multipath": "multipath" 00:38:12.553 } 00:38:12.553 }, 00:38:12.553 { 00:38:12.553 "method": "bdev_nvme_set_hotplug", 00:38:12.553 
"params": { 00:38:12.553 "period_us": 100000, 00:38:12.553 "enable": false 00:38:12.553 } 00:38:12.553 }, 00:38:12.553 { 00:38:12.553 "method": "bdev_wait_for_examine" 00:38:12.553 } 00:38:12.553 ] 00:38:12.553 }, 00:38:12.553 { 00:38:12.553 "subsystem": "nbd", 00:38:12.553 "config": [] 00:38:12.553 } 00:38:12.553 ] 00:38:12.553 }' 00:38:12.553 [2024-11-06 15:50:30.461902] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 00:38:12.553 [2024-11-06 15:50:30.461957] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113255 ] 00:38:12.813 [2024-11-06 15:50:30.547496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.813 [2024-11-06 15:50:30.576325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:12.813 [2024-11-06 15:50:30.720690] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:13.384 15:50:31 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:13.384 15:50:31 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:13.384 15:50:31 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:13.384 15:50:31 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:13.384 15:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:13.644 15:50:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:13.644 15:50:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:13.644 15:50:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:13.644 15:50:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:13.644 15:50:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:13.644 15:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:13.644 15:50:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:13.904 15:50:31 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:13.904 15:50:31 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:13.905 15:50:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:13.905 15:50:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:13.905 15:50:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:13.905 15:50:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:13.905 15:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:13.905 15:50:31 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:13.905 15:50:31 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:13.905 15:50:31 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:13.905 15:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:14.165 15:50:31 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:14.165 15:50:31 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:14.165 15:50:31 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.EOr78vznjF /tmp/tmp.bj1bNoIUn5 00:38:14.165 15:50:31 keyring_file -- keyring/file.sh@20 -- # killprocess 4113255 00:38:14.165 15:50:31 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 4113255 ']' 00:38:14.165 15:50:31 keyring_file -- common/autotest_common.sh@956 -- # kill -0 4113255 00:38:14.165 15:50:31 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:14.165 15:50:31 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:14.165 15:50:31 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4113255 00:38:14.165 15:50:32 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:14.165 15:50:32 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:14.165 15:50:32 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4113255' 00:38:14.165 killing process with pid 4113255 00:38:14.165 15:50:32 keyring_file -- common/autotest_common.sh@971 -- # kill 4113255 00:38:14.165 Received shutdown signal, test time was about 1.000000 seconds 00:38:14.165 00:38:14.165 Latency(us) 00:38:14.165 [2024-11-06T14:50:32.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.165 [2024-11-06T14:50:32.148Z] =================================================================================================================== 00:38:14.165 [2024-11-06T14:50:32.148Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:14.165 15:50:32 keyring_file -- common/autotest_common.sh@976 -- # wait 4113255 00:38:14.425 15:50:32 keyring_file -- keyring/file.sh@21 -- # killprocess 4111395 00:38:14.425 15:50:32 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 4111395 ']' 00:38:14.425 15:50:32 keyring_file -- common/autotest_common.sh@956 -- # kill -0 4111395 00:38:14.425 15:50:32 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:14.425 15:50:32 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:14.425 15:50:32 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4111395 00:38:14.425 15:50:32 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:14.425 15:50:32 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:14.425 15:50:32 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4111395' 00:38:14.425 killing process with pid 4111395 00:38:14.425 15:50:32 keyring_file -- common/autotest_common.sh@971 -- # kill 4111395 00:38:14.425 15:50:32 keyring_file -- common/autotest_common.sh@976 -- # wait 4111395 00:38:14.686 00:38:14.686 real 0m11.983s 00:38:14.686 user 0m28.932s 00:38:14.686 sys 0m2.694s 00:38:14.686 15:50:32 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:14.686 15:50:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:14.686 ************************************ 00:38:14.686 END TEST keyring_file 00:38:14.686 ************************************ 00:38:14.686 15:50:32 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:38:14.686 15:50:32 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:14.686 15:50:32 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:14.686 15:50:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 
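With keyring_file finished, the suite moves on to keyring_linux, which stores PSKs in the kernel keyring instead of key files, so linux.sh is launched through scripts/keyctl-session-wrapper to get a private session keyring. The wrapper's exact contents are not shown in this log, but its effect can be approximated with keyctl(1), which prints the "Joined session keyring" line seen below:

# run the test inside a fresh anonymous session keyring (approximation of the wrapper)
keyctl session - /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh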
00:38:14.686 15:50:32 -- common/autotest_common.sh@10 -- # set +x 00:38:14.686 ************************************ 00:38:14.686 START TEST keyring_linux 00:38:14.686 ************************************ 00:38:14.686 15:50:32 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:14.686 Joined session keyring: 89174406 00:38:14.686 * Looking for test storage... 00:38:14.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:14.686 15:50:32 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:14.686 15:50:32 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:38:14.686 15:50:32 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:14.946 15:50:32 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:14.946 15:50:32 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:14.947 15:50:32 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:14.947 15:50:32 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:14.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.947 --rc genhtml_branch_coverage=1 00:38:14.947 --rc genhtml_function_coverage=1 00:38:14.947 --rc genhtml_legend=1 00:38:14.947 --rc geninfo_all_blocks=1 00:38:14.947 --rc geninfo_unexecuted_blocks=1 00:38:14.947 00:38:14.947 ' 00:38:14.947 15:50:32 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:14.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.947 --rc genhtml_branch_coverage=1 00:38:14.947 --rc genhtml_function_coverage=1 00:38:14.947 --rc genhtml_legend=1 00:38:14.947 --rc geninfo_all_blocks=1 00:38:14.947 --rc geninfo_unexecuted_blocks=1 00:38:14.947 00:38:14.947 ' 00:38:14.947 15:50:32 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:14.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.947 --rc genhtml_branch_coverage=1 00:38:14.947 --rc genhtml_function_coverage=1 00:38:14.947 --rc genhtml_legend=1 00:38:14.947 --rc geninfo_all_blocks=1 00:38:14.947 --rc geninfo_unexecuted_blocks=1 00:38:14.947 00:38:14.947 ' 00:38:14.947 15:50:32 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:14.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.947 --rc genhtml_branch_coverage=1 00:38:14.947 --rc genhtml_function_coverage=1 00:38:14.947 --rc genhtml_legend=1 00:38:14.947 --rc geninfo_all_blocks=1 00:38:14.947 --rc geninfo_unexecuted_blocks=1 00:38:14.947 00:38:14.947 ' 00:38:14.947 15:50:32 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:14.947 15:50:32 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:14.947 15:50:32 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.947 15:50:32 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.947 15:50:32 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.947 15:50:32 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:14.947 15:50:32 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
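One detail from the environment dump above: the NVME_HOSTNQN/NVME_HOSTID pair comes from nvme-cli. A sketch of the derivation (the suffix-stripping shown here is an assumption, but it matches the values recorded in this log):

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
NVME_HOSTID=${NVME_HOSTNQN##*:}    # the trailing UUID portion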
00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:14.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:14.947 15:50:32 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:14.947 15:50:32 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:14.947 15:50:32 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:14.947 15:50:32 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:14.947 15:50:32 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:14.947 15:50:32 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:14.947 /tmp/:spdk-test:key0 00:38:14.947 15:50:32 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:14.947 
15:50:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:14.947 15:50:32 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:14.947 15:50:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:14.947 /tmp/:spdk-test:key1 00:38:14.947 15:50:32 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:14.947 15:50:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=4113705 00:38:14.947 15:50:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 4113705 00:38:14.947 15:50:32 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 4113705 ']' 00:38:14.947 15:50:32 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.948 15:50:32 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:14.948 15:50:32 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:14.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.948 15:50:32 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:14.948 15:50:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:14.948 [2024-11-06 15:50:32.878768] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
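The format_interchange_psk steps above pipe an inline script through "python -" to build the NVMe/TCP TLS PSK interchange string written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. A reconstruction of what that heredoc computes, as a sketch: appending a little-endian CRC-32 of the literal key string before base64-encoding is an assumption, chosen because it reproduces the NVMeTLSkey-1:00:... values seen in this log.

python3 - <<'EOF'
import base64, struct, zlib
# prefix/key/digest as passed on the format_interchange_psk line above
prefix, key, digest = "NVMeTLSkey-1", "00112233445566778899aabbccddeeff", 0
payload = key.encode() + struct.pack("<I", zlib.crc32(key.encode()))
print("%s:%02d:%s:" % (prefix, digest, base64.b64encode(payload).decode()))
EOF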
00:38:14.948 [2024-11-06 15:50:32.878841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113705 ] 00:38:15.208 [2024-11-06 15:50:32.970776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:15.208 [2024-11-06 15:50:33.006247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:15.778 15:50:33 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:15.778 15:50:33 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:15.778 15:50:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:15.778 15:50:33 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.778 15:50:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:15.778 [2024-11-06 15:50:33.688770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:15.778 null0 00:38:15.778 [2024-11-06 15:50:33.720824] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:15.778 [2024-11-06 15:50:33.721169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:15.778 15:50:33 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.778 15:50:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:15.778 1025486007 00:38:15.778 15:50:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:15.778 1057756178 00:38:15.778 15:50:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=4114028 00:38:15.778 15:50:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 4114028 /var/tmp/bperf.sock 00:38:15.778 15:50:33 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:15.778 15:50:33 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 4114028 ']' 00:38:15.778 15:50:33 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:15.778 15:50:33 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:15.778 15:50:33 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:15.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:15.778 15:50:33 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:15.778 15:50:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:16.039 [2024-11-06 15:50:33.801433] Starting SPDK v25.01-pre git sha1 924c8133b / DPDK 24.03.0 initialization... 
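The two serials printed above (1025486007 for :spdk-test:key0 and 1057756178 for :spdk-test:key1) are what linux.sh later matches against keyring_get_keys output via keyctl search and keyctl print, as seen further down. The kernel-side round-trip, using the key string from this log:

sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0   # resolves the description to the same serial
keyctl print "$sn"                      # dumps the payload for comparison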
00:38:16.039 [2024-11-06 15:50:33.801481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4114028 ]
00:38:16.039 [2024-11-06 15:50:33.883420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:16.039 [2024-11-06 15:50:33.913335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:16.609 15:50:34 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:38:16.609 15:50:34 keyring_linux -- common/autotest_common.sh@866 -- # return 0
00:38:16.609 15:50:34 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:38:16.609 15:50:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:38:16.869 15:50:34 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:38:16.869 15:50:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:38:17.131 15:50:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:38:17.131 15:50:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:38:17.391 [2024-11-06 15:50:35.134934] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:38:17.391 nvme0n1
00:38:17.391 15:50:35 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:38:17.391 15:50:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:38:17.391 15:50:35 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:38:17.391 15:50:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:38:17.391 15:50:35 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:38:17.391 15:50:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:17.652 15:50:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:38:17.652 15:50:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:38:17.652 15:50:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:38:17.652 15:50:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:38:17.652 15:50:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:38:17.652 15:50:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:38:17.652 15:50:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:17.652 15:50:35 keyring_linux -- keyring/linux.sh@25 -- # sn=1025486007
00:38:17.652 15:50:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:38:17.652 15:50:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
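Note: check_keys above pairs two views of the same key: the count and serial that the bdevperf app reports over JSON-RPC (keyring_get_keys), and the serial the kernel reports for the same name (keyctl search). A condensed sketch of that verification, reusing the socket path and key name from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq length          # expect 1
  $rpc -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'      # SPDK's serial
  keyctl search @s user :spdk-test:key0                             # kernel's serial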
00:38:17.652 15:50:35 keyring_linux -- keyring/linux.sh@26 -- # [[ 1025486007 == \1\0\2\5\4\8\6\0\0\7 ]]
00:38:17.652 15:50:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1025486007
00:38:17.652 15:50:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:38:17.652 15:50:35 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:17.963 Running I/O for 1 seconds...
00:38:18.905 24590.00 IOPS, 96.05 MiB/s
00:38:18.905 Latency(us)
00:38:18.905 [2024-11-06T14:50:36.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:18.905 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:38:18.905 nvme0n1 : 1.01 24589.45 96.05 0.00 0.00 5190.10 1761.28 6389.76
00:38:18.905 [2024-11-06T14:50:36.888Z] ===================================================================================================================
00:38:18.905 [2024-11-06T14:50:36.888Z] Total : 24589.45 96.05 0.00 0.00 5190.10 1761.28 6389.76
00:38:18.905 {
00:38:18.905 "results": [
00:38:18.905 {
00:38:18.905 "job": "nvme0n1",
00:38:18.905 "core_mask": "0x2",
00:38:18.905 "workload": "randread",
00:38:18.905 "status": "finished",
00:38:18.905 "queue_depth": 128,
00:38:18.905 "io_size": 4096,
00:38:18.905 "runtime": 1.005228,
00:38:18.905 "iops": 24589.446374354873,
00:38:18.905 "mibps": 96.05252489982372,
00:38:18.905 "io_failed": 0,
00:38:18.905 "io_timeout": 0,
00:38:18.905 "avg_latency_us": 5190.101067508158,
00:38:18.905 "min_latency_us": 1761.28,
00:38:18.905 "max_latency_us": 6389.76
00:38:18.905 }
00:38:18.905 ],
00:38:18.905 "core_count": 1
00:38:18.905 }
00:38:18.905 15:50:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:38:18.905 15:50:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:38:19.165 15:50:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:38:19.165 15:50:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:38:19.165 15:50:36 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:38:19.165 15:50:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:38:19.165 15:50:36 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:38:19.165 15:50:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:38:19.165 15:50:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:38:19.165 15:50:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:38:19.165 15:50:37 keyring_linux -- keyring/linux.sh@23 -- # return
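Note: the perform_tests run above emits the same numbers twice, once as a fixed-width table and once as JSON. If the JSON is captured to a file, the headline figures can be pulled out with jq — field names as printed above; the results.json file name here is an assumption:

  jq -r '.results[] | "\(.job): \(.iops | floor) IOPS, avg \(.avg_latency_us | floor) us"' results.json
  # -> nvme0n1: 24589 IOPS, avg 5190 us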
00:38:19.165 15:50:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:38:19.165 15:50:37 keyring_linux -- common/autotest_common.sh@650 -- # local es=0
00:38:19.165 15:50:37 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:38:19.165 15:50:37 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:38:19.165 15:50:37 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:38:19.165 15:50:37 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:38:19.165 15:50:37 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:38:19.165 15:50:37 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:38:19.165 15:50:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:38:19.426 [2024-11-06 15:50:37.260261] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:38:19.426 [2024-11-06 15:50:37.260720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c89a60 (107): Transport endpoint is not connected
00:38:19.426 [2024-11-06 15:50:37.261718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c89a60 (9): Bad file descriptor
00:38:19.426 [2024-11-06 15:50:37.262720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:38:19.426 [2024-11-06 15:50:37.262727] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:38:19.426 [2024-11-06 15:50:37.262734] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:38:19.426 [2024-11-06 15:50:37.262740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
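Note: the attach with :spdk-test:key1 is meant to fail here — the listener was set up with key0 — so the script runs it under the NOT wrapper, whose bookkeeping (local es=0, es=1, (( es > 128 )), (( !es == 0 ))) is visible in the trace. Reduced to its core, the idiom inverts an exit status; this is a sketch, as the real helper in autotest_common.sh additionally screens out signal deaths and known-benign errors:

  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))    # succeed only when the wrapped command failed
  }
  NOT false && echo "failure caught as expected"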
00:38:19.426 request:
00:38:19.426 {
00:38:19.426 "name": "nvme0",
00:38:19.426 "trtype": "tcp",
00:38:19.426 "traddr": "127.0.0.1",
00:38:19.426 "adrfam": "ipv4",
00:38:19.426 "trsvcid": "4420",
00:38:19.426 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:19.426 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:19.426 "prchk_reftag": false,
00:38:19.426 "prchk_guard": false,
00:38:19.426 "hdgst": false,
00:38:19.426 "ddgst": false,
00:38:19.426 "psk": ":spdk-test:key1",
00:38:19.426 "allow_unrecognized_csi": false,
00:38:19.426 "method": "bdev_nvme_attach_controller",
00:38:19.426 "req_id": 1
00:38:19.426 }
00:38:19.426 Got JSON-RPC error response
00:38:19.426 response:
00:38:19.426 {
00:38:19.426 "code": -5,
00:38:19.426 "message": "Input/output error"
00:38:19.426 }
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@653 -- # es=1
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@33 -- # sn=1025486007
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1025486007
00:38:19.426 1 links removed
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@33 -- # sn=1057756178
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1057756178
00:38:19.426 1 links removed
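Note: the cleanup loop above tears the keys down by name: keyctl search maps each :spdk-test:keyN back to its serial, and keyctl unlink removes the link, producing the "1 links removed" lines. Standalone, the same teardown looks like this sketch:

  for name in :spdk-test:key0 :spdk-test:key1; do
      sn=$(keyctl search @s user "$name") || continue   # name -> serial
      keyctl unlink "$sn"                               # prints "N links removed"
  done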
00:38:19.426 15:50:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 4114028
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 4114028 ']'
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 4114028
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@957 -- # uname
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4114028
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4114028'
killing process with pid 4114028
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@971 -- # kill 4114028
00:38:19.426 Received shutdown signal, test time was about 1.000000 seconds
00:38:19.426
00:38:19.426 Latency(us)
00:38:19.426 [2024-11-06T14:50:37.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:19.426 [2024-11-06T14:50:37.409Z] ===================================================================================================================
00:38:19.426 [2024-11-06T14:50:37.409Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:19.426 15:50:37 keyring_linux -- common/autotest_common.sh@976 -- # wait 4114028
00:38:19.686 15:50:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 4113705
00:38:19.686 15:50:37 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 4113705 ']'
00:38:19.686 15:50:37 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 4113705
00:38:19.686 15:50:37 keyring_linux -- common/autotest_common.sh@957 -- # uname
00:38:19.686 15:50:37 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:38:19.686 15:50:37 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4113705
00:38:19.686 15:50:37 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:38:19.686 15:50:37 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:38:19.686 15:50:37 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4113705'
killing process with pid 4113705
00:38:19.686 15:50:37 keyring_linux -- common/autotest_common.sh@971 -- # kill 4113705
00:38:19.686 15:50:37 keyring_linux -- common/autotest_common.sh@976 -- # wait 4113705
00:38:19.948
00:38:19.948 real 0m5.220s
00:38:19.948 user 0m9.738s
00:38:19.948 sys 0m1.420s
00:38:19.948 15:50:37 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable
00:38:19.948 15:50:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:38:19.948 ************************************
00:38:19.948 END TEST keyring_linux
00:38:19.948 ************************************
00:38:19.948 15:50:37 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:38:19.948 15:50:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:38:19.948 15:50:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:38:19.948 15:50:37 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:38:19.948 15:50:37 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:38:19.948 15:50:37 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:38:19.948 15:50:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:38:19.948 15:50:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:38:19.948 15:50:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:38:19.948 15:50:37 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:38:19.948 15:50:37 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:38:19.948 15:50:37 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:38:19.948 15:50:37 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:38:19.948 15:50:37 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:38:19.948 15:50:37 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:38:19.948 15:50:37 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:38:19.948 15:50:37 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:38:19.948 15:50:37 -- common/autotest_common.sh@724 -- # xtrace_disable
00:38:19.948 15:50:37 -- common/autotest_common.sh@10 -- # set +x
00:38:19.948 15:50:37 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:38:19.948 15:50:37 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:38:19.948 15:50:37 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:38:19.948 15:50:37 -- common/autotest_common.sh@10 -- # set +x
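Note: both shutdowns above go through killprocess, whose shape can be read off the trace: probe the pid with kill -0, read the command name with ps so only the expected processes (reactor_0/reactor_1 here) are signalled, then kill and wait. A trimmed sketch of that pattern — the sudo branch and timeout handling of the real helper are omitted:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 1        # still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0
      echo "killing process with pid $pid ($name)"
      kill "$pid" && wait "$pid"                    # wait works for child processes
  }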
00:38:28.163 INFO: APP EXITING
00:38:28.163 INFO: killing all VMs
00:38:28.163 INFO: killing vhost app
00:38:28.163 INFO: EXIT DONE
00:38:30.704 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:38:30.964 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:38:30.964 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:38:30.964 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:38:30.964 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:38:30.964 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:38:30.964 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:38:30.964 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:38:30.964 0000:65:00.0 (144d a80a): Already using the nvme driver
00:38:30.964 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:38:30.964 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:38:30.964 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:38:31.224 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:38:31.224 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:38:31.224 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:38:31.224 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:38:31.224 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:38:35.427 Cleaning
00:38:35.427 Removing: /var/run/dpdk/spdk0/config
00:38:35.427 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:38:35.427 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:38:35.427 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:38:35.427 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:38:35.427 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:38:35.427 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:38:35.427 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:38:35.427 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:38:35.427 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:38:35.427 Removing: /var/run/dpdk/spdk0/hugepage_info
00:38:35.427 Removing: /var/run/dpdk/spdk1/config
00:38:35.427 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:38:35.427 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:38:35.427 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:38:35.427 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:38:35.427 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:38:35.427 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:38:35.427 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:38:35.427 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:38:35.427 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:38:35.427 Removing: /var/run/dpdk/spdk1/hugepage_info
00:38:35.428 Removing: /var/run/dpdk/spdk2/config
00:38:35.428 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:38:35.428 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:38:35.428 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:38:35.428 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:38:35.428 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:38:35.428 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:38:35.428 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:38:35.428 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:38:35.428 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:38:35.428 Removing: /var/run/dpdk/spdk2/hugepage_info
00:38:35.428 Removing: /var/run/dpdk/spdk3/config
00:38:35.428 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:38:35.428 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:38:35.428 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:38:35.428 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:38:35.428 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:38:35.428 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:38:35.428 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:38:35.428 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:38:35.428 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:38:35.428 Removing: /var/run/dpdk/spdk3/hugepage_info
00:38:35.428 Removing: /var/run/dpdk/spdk4/config
00:38:35.428 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:38:35.428 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:38:35.428 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:38:35.428 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:38:35.428 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:38:35.428 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:38:35.428 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:38:35.428 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:38:35.428 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:38:35.428 Removing: /var/run/dpdk/spdk4/hugepage_info
00:38:35.428 Removing: /dev/shm/bdev_svc_trace.1
00:38:35.428 Removing: /dev/shm/nvmf_trace.0
00:38:35.428 Removing: /dev/shm/spdk_tgt_trace.pid3532380
00:38:35.428 Removing: /var/run/dpdk/spdk0
00:38:35.428 Removing: /var/run/dpdk/spdk1
00:38:35.428 Removing: /var/run/dpdk/spdk2
00:38:35.428 Removing: /var/run/dpdk/spdk3
00:38:35.428 Removing: /var/run/dpdk/spdk4
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3530892
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3532380
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3533234
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3534271
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3534609
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3535681
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3535828
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3536151
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3537284
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3538075
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3538445
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3538790
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3539135
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3539434
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3539725
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3540082
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3540466
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3541541
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3545067
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3545371
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3545725
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3545867
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3546244
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3546580
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3546954
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3547020
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3547331
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3547667
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3547730
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3548038
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3548488
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3548840
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3549239
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3553811
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3559215
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3571841
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3572580
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3577813
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3578288
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3583472
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3590589
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3593886
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3606363
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3617410
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3619604
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3621218
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3642426
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3647222
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3704651
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3711077
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3718281
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3726217
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3726220
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3727333
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3728335
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3729804
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3730652
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3730792
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3730993
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3731148
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3731158
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3732159
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3733166
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3734170
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3734840
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3734844
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3735183
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3736614
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3737697
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3747717
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3782338
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3787927
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3789792
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3792031
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3792376
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3792653
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3792832
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3793681
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3795804
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3797182
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3797678
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3800312
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3801052
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3802029
00:38:35.428 Removing: /var/run/dpdk/spdk_pid3806996
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3814312
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3814313
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3814314
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3819095
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3829491
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3834315
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3841584
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3843077
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3844719
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3846448
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3852177
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3857384
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3862446
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3872471
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3872477
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3877567
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3877896
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3878112
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3878581
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3878588
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3884320
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3884828
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3890356
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3893572
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3900126
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3906701
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3916970
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3926041
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3926093
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3949217
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3949903
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3950692
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3951494
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3952503
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3953338
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3954024
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3954710
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3959833
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3960135
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3967462
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3967606
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3974800
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3980031
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3991630
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3992376
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3997505
00:38:35.689 Removing: /var/run/dpdk/spdk_pid3997854
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4002929
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4009688
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4012754
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4025621
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4036308
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4038314
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4039327
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4059020
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4063779
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4066963
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4074874
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4074882
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4081276
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4083696
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4085967
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4087291
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4089678
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4091152
00:38:35.689 Removing: /var/run/dpdk/spdk_pid4101196
00:38:35.950 Removing: /var/run/dpdk/spdk_pid4101862
00:38:35.950 Removing: /var/run/dpdk/spdk_pid4102485
00:38:35.950 Removing: /var/run/dpdk/spdk_pid4105333
00:38:35.950 Removing: /var/run/dpdk/spdk_pid4105831
00:38:35.950 Removing: /var/run/dpdk/spdk_pid4106516
00:38:35.950 Removing: /var/run/dpdk/spdk_pid4111395
00:38:35.950 Removing: /var/run/dpdk/spdk_pid4111442
00:38:35.950 Removing: /var/run/dpdk/spdk_pid4113255
00:38:35.950 Removing: /var/run/dpdk/spdk_pid4113705
00:38:35.950 Removing: /var/run/dpdk/spdk_pid4114028
00:38:35.950 Clean
00:38:35.950 15:50:53 -- common/autotest_common.sh@1451 -- # return 0
00:38:35.950 15:50:53 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:38:35.950 15:50:53 -- common/autotest_common.sh@730 -- # xtrace_disable
00:38:35.950 15:50:53 -- common/autotest_common.sh@10 -- # set +x
00:38:35.950 15:50:53 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:38:35.950 15:50:53 -- common/autotest_common.sh@730 -- # xtrace_disable
00:38:35.950 15:50:53 -- common/autotest_common.sh@10 -- # set +x
00:38:35.950 15:50:53 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:35.950 15:50:53 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:38:35.950 15:50:53 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:38:35.951 15:50:53 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:38:35.951 15:50:53 -- spdk/autotest.sh@394 -- # hostname
00:38:35.951 15:50:53 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-13 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:38:36.211 geninfo: WARNING: invalid characters removed from testname!
00:39:02.787 15:51:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:04.696 15:51:22 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:06.604 15:51:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:07.987 15:51:25 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:10.528 15:51:27 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
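Note: the lcov sequence above is a merge-then-filter pipeline: one pass with repeated -a combines the baseline and test captures into cov_total.info, then successive -r passes strip path globs (bundled DPDK, system headers, example apps) out of the merged file. Its skeleton, with the long --rc option lists dropped for brevity:

  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"   # merge captures
  lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"               # drop bundled DPDK
  lcov -q -r "$out/cov_total.info" '/usr/*'   -o "$out/cov_total.info"               # drop system headers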
00:39:11.908 15:51:29 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:13.289 15:51:31 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:13.289 15:51:31 -- spdk/autorun.sh@1 -- $ timing_finish
00:39:13.289 15:51:31 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:39:13.289 15:51:31 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:13.289 15:51:31 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:13.289 15:51:31 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:13.549 + [[ -n 3445425 ]]
00:39:13.549 + sudo kill 3445425
00:39:13.560 [Pipeline] }
00:39:13.575 [Pipeline] // stage
00:39:13.581 [Pipeline] }
00:39:13.595 [Pipeline] // timeout
00:39:13.600 [Pipeline] }
00:39:13.614 [Pipeline] // catchError
00:39:13.620 [Pipeline] }
00:39:13.635 [Pipeline] // wrap
00:39:13.641 [Pipeline] }
00:39:13.655 [Pipeline] // catchError
00:39:13.665 [Pipeline] stage
00:39:13.668 [Pipeline] { (Epilogue)
00:39:13.681 [Pipeline] catchError
00:39:13.683 [Pipeline] {
00:39:13.695 [Pipeline] echo
00:39:13.697 Cleanup processes
00:39:13.703 [Pipeline] sh
00:39:13.990 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:13.990 4127658 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:14.006 [Pipeline] sh
00:39:14.289 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:14.290 ++ grep -v 'sudo pgrep'
00:39:14.290 ++ awk '{print $1}'
00:39:14.290 + sudo kill -9
00:39:14.290 + true
00:39:14.302 [Pipeline] sh
00:39:14.591 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:26.830 [Pipeline] sh
00:39:27.119 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:27.119 Artifacts sizes are good
00:39:27.135 [Pipeline] archiveArtifacts
00:39:27.143 Archiving artifacts
00:39:27.274 [Pipeline] sh
00:39:27.563 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:39:27.579 [Pipeline] cleanWs
00:39:27.589 [WS-CLEANUP] Deleting project workspace...
00:39:27.589 [WS-CLEANUP] Deferred wipeout is used...
00:39:27.596 [WS-CLEANUP] done
00:39:27.599 [Pipeline] }
00:39:27.617 [Pipeline] // catchError
00:39:27.629 [Pipeline] sh
00:39:28.001 + logger -p user.info -t JENKINS-CI
00:39:28.044 [Pipeline] }
00:39:28.058 [Pipeline] // stage
00:39:28.063 [Pipeline] }
00:39:28.077 [Pipeline] // node
00:39:28.083 [Pipeline] End of Pipeline
00:39:28.119 Finished: SUCCESS